In reality, the entire map should be sent to the verifier first (with the colors hidden behind the post-its), so that if it were a bogus, randomly colored map, you would eventually find two adjacent regions with the same color after trying many times (think hundreds or thousands of times in the website's case). If you try enough times and never find such an adjacent pair, you become convinced that the prover really has a correctly 3-colored map.
Note that the entire map is sent again, with its colors reshuffled, each time after you choose the two points.
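To make the soundness argument concrete, here is a rough simulation sketch of the repeated commit-and-reveal game (a toy graph and a toy RNG, not real cryptography): the prover reshuffles the three colors every round, the verifier opens one random edge, and a bogus coloring gets caught with probability at least 1/|E| per round, so it fails almost surely after enough rounds.

```rust
// Toy sketch of the interactive 3-coloring game: NOT real cryptography,
// just the counting argument. The prover reshuffles the colors every round,
// the verifier opens one random edge and checks that its endpoints differ.

type Color = u8; // 0, 1 or 2

struct Graph {
    edges: Vec<(usize, usize)>,
}

// One round: apply a fresh random permutation of the three colors
// ("reshuffling the map"), then reveal a single randomly chosen edge.
fn round_passes(graph: &Graph, coloring: &[Color], rng: &mut Lcg) -> bool {
    let perm = rng.permutation3();
    let (u, v) = graph.edges[rng.next_below(graph.edges.len())];
    perm[coloring[u] as usize] != perm[coloring[v] as usize]
}

fn main() {
    // A triangle: genuinely 3-colorable.
    let graph = Graph { edges: vec![(0, 1), (1, 2), (2, 0)] };
    let honest = [0u8, 1, 2];
    let bogus = [0u8, 0, 1]; // vertices 0 and 1 share a color on edge (0, 1)

    let mut rng = Lcg(42);
    let rounds = 1000;
    let honest_ok = (0..rounds).all(|_| round_passes(&graph, &honest, &mut rng));
    let bogus_ok = (0..rounds).all(|_| round_passes(&graph, &bogus, &mut rng));

    // A bogus coloring survives each round with probability at most 1 - 1/|E|,
    // so after hundreds or thousands of rounds it is caught almost surely.
    println!("honest coloring survives {rounds} rounds: {honest_ok}");
    println!("bogus coloring survives {rounds} rounds: {bogus_ok}");
}

// Tiny LCG standing in for a real RNG, only to keep the sketch dependency-free.
struct Lcg(u64);

impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
    fn next_below(&mut self, n: usize) -> usize {
        (self.next() % n as u64) as usize
    }
    fn permutation3(&mut self) -> [u8; 3] {
        const PERMS: [[u8; 3]; 6] =
            [[0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], [2, 1, 0]];
        PERMS[self.next_below(6)]
    }
}
```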
> Making things thread safe for runtime-agnostic utilities like WebSocket is yet another price we pay for making everything multi-threaded by default. The standard way of doing what I'm doing in my code above would be to spawn one of the loops on a separate background task, which could land on a separate thread, meaning we must do all that synchronization to manage reading and writing to a socket from different threads for no good reason.
Why so? Libraries like quinn[1] split out a "no IO" crate that defines a runtime-agnostic protocol implementation. That way we don't have to suffer by forcing ourselves into synchronization primitives.
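Roughly what that "no IO" shape looks like, as a sketch (the names are illustrative, not quinn's actual API): the protocol core is a plain state machine that consumes and produces bytes, and the caller decides how those bytes move, so the same core works in single-threaded and multithreaded runtimes without any synchronization baked in.

```rust
use std::collections::VecDeque;

// Illustrative sans-IO protocol core (not quinn's real types): it never
// touches sockets, timers, or an executor, so it doesn't care whether the
// surrounding runtime is single-threaded or multi-threaded.
struct Connection {
    outgoing: VecDeque<Vec<u8>>,
}

impl Connection {
    fn new() -> Self {
        Self { outgoing: VecDeque::new() }
    }

    // Feed bytes that arrived from *somewhere*; the state machine decides
    // what must be sent back (a trivial echo here, purely for illustration).
    fn handle_input(&mut self, data: &[u8]) {
        self.outgoing.push_back(data.to_vec());
    }

    // Drain bytes the caller should transmit, using whatever IO it prefers.
    fn poll_transmit(&mut self) -> Option<Vec<u8>> {
        self.outgoing.pop_front()
    }
}

fn main() {
    // This driver is synchronous, but the same Connection could be driven
    // from tokio, async-std, or a thread-per-core executor unchanged.
    let mut conn = Connection::new();
    conn.handle_input(b"hello");
    while let Some(packet) = conn.poll_transmit() {
        println!("would send {} bytes over the caller's transport", packet.len());
    }
}
```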
Also, IMO it's relatively easy to use a Send-bounded future in a non-Send (in other words, single-threaded) runtime environment, but almost impossible to do the opposite. Ecosystem users can freely use a single-threaded async runtime, but ecosystem providers should not. If you want every user to use only a single-threaded runtime, that's a major loss for the Rust ecosystem.
Typechecked Send/Sync bounds are one of the holy grails that Rust provides. Although multithreaded async runtimes are overkill for most users, we should not abandon them, because they open an opportunity for high-end users who might seek out Rust for their high-performance backends.
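To illustrate the asymmetry with a minimal, self-contained sketch (spawn_send and spawn_local below are hypothetical stand-ins for a work-stealing spawn and a single-threaded spawn, not any real runtime's API): a Send future is accepted by both, while a future holding an Rc is rejected at compile time by the Send-bounded one.

```rust
use std::future::Future;
use std::rc::Rc;
use std::sync::Arc;

// Hypothetical stand-in for a work-stealing executor's spawn: it demands
// Send, exactly like a multithreaded runtime's spawn does.
fn spawn_send<F: Future + Send + 'static>(_fut: F) {}

// Hypothetical stand-in for a single-threaded executor's spawn: no Send bound.
fn spawn_local<F: Future + 'static>(_fut: F) {}

fn main() {
    // A Send future is accepted by either flavor of executor...
    let shared = Arc::new(1u32);
    spawn_send(async move { *shared });
    spawn_local(async move { *Arc::new(2u32) });

    // ...but the reverse direction is rejected at compile time, because a
    // future holding an Rc is not Send. Uncomment the first call to see it.
    let local = Rc::new(3u32);
    // spawn_send(async move { *local });
    spawn_local(async move { *local });
}
```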
> Also, IMO it's relatively easy to use a Send-bounded future in a non-Send (in other words, single-threaded) runtime environment, but almost impossible to do the opposite. Ecosystem users can freely use a single-threaded async runtime, but ecosystem providers should not.
We have Send and non-Send primitives in Rust for a reason. You could use Arc/Mutex/AtomicUsize/... everywhere on a single thread, but you should use Rc/RefCell/Cell<usize>/... instead whenever possible since those are just cheaper. The problem is that in the ecosystem we are building the prevailing assumption is that anything async must also be Send, which means we end up using Send primitives even in non-Send contexts, which is always a waste.
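A small sketch of the two flavors side by side: the same shape of code, but the non-Send column skips atomic reference counting and lock acquisition, which is exactly the cost you pay for no reason when everything is forced to be Send.

```rust
use std::cell::{Cell, RefCell};
use std::rc::Rc;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, Mutex};

fn main() {
    // Send/Sync flavor: pays for atomic reference counts and lock acquisition
    // even if no second thread ever exists.
    let shared = Arc::new(Mutex::new(Vec::<u32>::new()));
    let hits = AtomicUsize::new(0);
    shared.lock().unwrap().push(1);
    hits.fetch_add(1, Ordering::Relaxed);

    // Non-Send flavor: same shape, strictly cheaper, confined to one thread.
    let local = Rc::new(RefCell::new(Vec::<u32>::new()));
    let local_hits = Cell::new(0usize);
    local.borrow_mut().push(1);
    local_hits.set(local_hits.get() + 1);

    println!("{} {}", hits.load(Ordering::Relaxed), local_hits.get());
}
```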
> If you want every user to use only a single-threaded runtime, that's a major loss for the Rust ecosystem.
Running single-threaded executors does not prohibit you from using threads; it just depends on how you want to do that. You can:
1. Have a single async executor running a threadpool that requires _everything_ to be Send.
2. Have a single threadpool, each thread running its own async executor, in which case only stuff that crosses thread boundaries needs to be Send.
The argument is that there are many scenarios where 2 is the optimal solution, both for performance and developer experience, but the ecosystem does not support it well.
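Here is a hedged sketch of option 2, assuming the tokio crate with its "rt" feature (the per-core worker logic is just a placeholder): one OS thread per core, each driving its own current-thread runtime, so only data that actually crosses threads would need to be Send.

```rust
use std::rc::Rc;
use std::thread;

fn main() {
    let cores = 4; // illustrative; real code would query the machine

    let handles: Vec<_> = (0..cores)
        .map(|core| {
            thread::spawn(move || {
                // Each thread owns its own current-thread runtime: no work
                // stealing, and no Send bound on the tasks it runs.
                let rt = tokio::runtime::Builder::new_current_thread()
                    .build()
                    .unwrap();

                let local = tokio::task::LocalSet::new();
                local.block_on(&rt, async move {
                    // Rc is fine here: this future never leaves this thread.
                    let state = Rc::new(core);
                    tokio::task::spawn_local(async move {
                        println!("core {state} handling its own connections");
                    })
                    .await
                    .unwrap();
                });
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
}
```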
I am eager to see what comes of tokio-uring, though. Especially if someone comes up with a good API for reads using shared buffer pools... that one might cause API changes everywhere.
> The problem is that in the ecosystem we are building the prevailing assumption is that anything async must also be Send, which means we end up using Send primitives even in non-Send contexts, which is always a waste.
Honestly I don't think every user wants top-notch throughput when writing asynchronous Rust applications. Most users just want the correctness of the Rust type system and its "lightweight" runtime characteristics compared to CPython, Node.js, etc., which already give fairly good performance.
The thing is, using Arc in a single-threaded runtime does not greatly harm performance. If it does matter, you are probably handling 1M+ rps per core, and at that point you should be using a multithreaded runtime anyway because it scales better, and it is exactly the case that benefits from thread-safe primitives.
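As a rough, machine-dependent sanity check of that claim, here is a micro-benchmark sketch comparing uncontended Rc and Arc clones (treat the numbers as an illustration, not a verdict):

```rust
use std::hint::black_box;
use std::rc::Rc;
use std::sync::Arc;
use std::time::Instant;

fn main() {
    const N: u32 = 10_000_000;

    // Uncontended Rc clone + drop: plain integer increments/decrements.
    let rc = Rc::new(0u64);
    let start = Instant::now();
    for _ in 0..N {
        black_box(Rc::clone(&rc));
    }
    println!("Rc:  {:?} for {N} clone/drop pairs", start.elapsed());

    // Uncontended Arc clone + drop: atomic increments/decrements, somewhat
    // slower, but still nanoseconds each on typical hardware.
    let arc = Arc::new(0u64);
    let start = Instant::now();
    for _ in 0..N {
        black_box(Arc::clone(&arc));
    }
    println!("Arc: {:?} for {N} clone/drop pairs", start.elapsed());
}
```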
Might be related: https://arxiv.org/abs/2211.16421
It's a paper about learning directly from JPEG encodings, which works well with vision transformers' patch mechanism.