But (unless you've got an excerpt that says otherwise) the Rustonomicon is about unsafe Rust. And I was explaining that safe Rust has data race freedom.
The Rustonomicon is not warning you about scary hidden problems in safe Rust; it's warning you about scary problems you need to care about when writing unsafe Rust, so that your unsafe Rust has appropriate safety rails before anybody else touches it.
// Safety: Can't touch this while anybody else might write
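That kind of Safety comment usually lives on an unsafe block whose precondition a safe wrapper enforces. A minimal sketch of the pattern (the function names here are illustrative, not from any real crate): the unsafe fn documents its contract, and the safe wrapper checks it, so callers never need to think about it.

```rust
/// Returns the first byte of the slice without a bounds check.
///
/// # Safety
/// The caller must guarantee `bytes` is non-empty.
unsafe fn first_byte_unchecked(bytes: &[u8]) -> u8 {
    // Safety: the caller guarantees the slice is non-empty.
    unsafe { *bytes.get_unchecked(0) }
}

/// Safe wrapper: it checks the precondition, so the unsafe call is sound.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // Safety: we just checked the slice is non-empty.
        Some(unsafe { first_byte_unchecked(bytes) })
    }
}

fn main() {
    assert_eq!(first_byte(b"hello"), Some(b'h'));
    assert_eq!(first_byte(b""), None);
}
```

The safety rail is the wrapper: once it exists, no downstream user can violate the documented contract.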
This reminds me of the mutex thing. Look at C++ std::mutex. You could implement exactly that in Rust. But, that's not what std::sync::Mutex is at all. Because if you implemented it in Rust, C++ std::mutex is either useless or unsafe and clearly we'd prefer neither.
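The difference is visible in the API: C++ std::mutex is a free-standing lock you must remember to pair with the data it protects, while Rust's std::sync::Mutex owns its data, so the only way to reach it is through the lock guard. A minimal sketch:

```rust
use std::sync::Mutex;

// The Mutex owns the i32; there is no way to touch the value
// except through the guard returned by lock().
fn increment(counter: &Mutex<i32>) -> i32 {
    let mut guard = counter.lock().unwrap(); // acquire the lock
    *guard += 1;
    *guard
} // guard dropped here, lock released automatically

fn main() {
    let counter = Mutex::new(0);
    assert_eq!(increment(&counter), 1);
    assert_eq!(increment(&counter), 2);
}
```

Forgetting to lock isn't a bug you can write here; it's a program that doesn't compile.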
But do you have some examples of Rust shared memory IPC that you believe are unsafe? It might be instructive to either show why they're actually safe after all or, alternatively, go add the unsafety explanations and work out what a safe wrapper would look like.
You, as a user of a crate deemed safe that underneath uses shmem, mmap, or a database, written (in who knows what) without taking proper care to prevent other processes from changing exactly the same underlying data segment, are in for a surprise and long debugging sessions.
The crate's public API surface is safe, after all, and unless the user has experience with distributed systems, the answer won't come right away.
But this hypothetical "written without taking the proper care" code is buggy. Like I said, it's the same situation as a bad implementation of Index, just more convoluted.
Rust's standard library takes this very seriously. In many languages if I try to sort() things which refuse to abide by common sense rules like "Having a consistent sort order" the algorithm used may blow up arbitrarily. But Rust's sort() is robust against that. You may create an infinite loop (legal in Rust, causes Undefined Behaviour in C++) and the result of sorting things without a meaningful ordering is unlikely to be helpful if it does finish, but it's guaranteed to be safe, you won't get Undefined Behaviour.
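A deliberately lawless comparator illustrates the guarantee. Note that recent versions of the standard sort may detect the violation and panic, while older ones just return the elements in some unspecified order; either outcome is memory-safe, so the sketch wraps the call in catch_unwind to show the worst case (the function name is illustrative):

```rust
use std::cmp::Ordering;
use std::panic;

// "Everything is Less than everything else" is not a total order.
// Rust's sort may panic on detecting this, but never exhibits UB.
fn chaotic_sort() -> Option<Vec<i32>> {
    panic::catch_unwind(|| {
        let mut v = vec![3, 1, 2, 5, 4];
        v.sort_by(|_, _| Ordering::Less); // inconsistent comparator
        v
    })
    .ok()
}

fn main() {
    match chaotic_sort() {
        Some(v) => println!("sort finished (order unspecified): {v:?}"),
        None => println!("sort panicked - still safe, no Undefined Behaviour"),
    }
}
```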
Rust's npm-like approach to crates and minimal approach to the standard library make this a real problem, regardless of how much quality goes into the standard library itself.
You would have a point if the standard library was batteries included.
Cargo-geiger and similar tools let you audit the crates you depend on to discover whether they contain unsafe code.
Of course, just because some Rust is unsafe does not mean it's wrong; it just means you're relying on it being correct, as you have to with all code in unsafe languages.
Even those tools don't catch data corruption caused by perfectly safe Rust accessing a table row from multiple threads without the protection of a transaction block or a row lock.
Hence, making blanket statements like "Rust prevents data races", without the context of when that is actually 100% true, does the language's advocacy no favours.
I think what you're imagining is just "What if people use Rust to write bad SQL queries?" which again, not a data race. Stupid perhaps, unlikely to give them the results they expected, but not a data race.
I am thinking that the scenario of multiple threads accessing a global variable is oversold as what "data race" means, while everything else gets ignored.
While it is real progress versus what other systems languages are capable of, it still leaves too much off the table that tends to be ignored when discussing data-consistency safety.
Stuff that usually requires formal methods or TLA+ approaches to guarantee everything goes as smoothly as possible.
The context here, right up at the top of the thread where perhaps you've forgotten it, is that (safe) Rust gets you Sequential Consistency, since it has Data Race Freedom.
This makes debugging easier, in the important sense that humans don't seem to be equipped to debug non-trivial programs at all unless they exhibit Sequential Consistency. It's easy enough to write a program for modern computers which doesn't have Sequential Consistency, but it hurts your head too much to debug it.
With your C++ hat on, this might seem like a distinction that doesn't make a difference: lack of Data Race Freedom in a C++ program results in Undefined Behaviour, but so do a buffer overrun, a null pointer dereference, signed overflow, and so many other trivial mistakes. So many that, as I understand it, an entire C++ sub-committee is trying to enumerate them. Thus, for a C++ programmer, any mistake can cause mysterious impossible-to-debug problems, so Data Race Freedom doesn't seem important.
Try your Java hat. In Java data races can happen but they don't cause Undefined Behaviour. Write a Java program with a data race. It's hard to reason about what it's doing! It can seem as though some variables take on inexplicable values, or program control flow isn't what you wrote. If you introduce such a race into a complex system you should see that it would be impractical to debug it. Most likely you'd just add mitigations and go home. This is loss of Sequential Consistency in its tamest form, and this is what safe Rust promises to avert.
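The contrast is concrete in safe Rust: the Java-style racy counter simply doesn't compile, and the version the compiler accepts behaves deterministically. A minimal sketch (thread and iteration counts are arbitrary):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Safe Rust won't let threads share mutable state without synchronization,
// so the only version that compiles is also the one with a deterministic result.
fn parallel_count(threads: u32, per_thread: u32) -> u32 {
    let counter = Arc::new(Mutex::new(0_u32));
    let mut handles = Vec::new();

    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1; // every increment is synchronized
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }

    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // With a data race this could be anything up to 8000; here it is exactly 8000.
    assert_eq!(parallel_count(8, 1000), 8000);
}
```

The racy equivalent, with a bare `static mut` counter touched from every thread, is rejected at compile time unless you reach for `unsafe`.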