When the entire OS is written in Rust or Go from the kernel on up, and all applications are written in the same language as well.
Oh, and the silicon itself becomes adapted to the paradigms of those programming languages, since C was designed around the existing silicon. Forcing entirely new hardware designs to track an evolving software paradigm is an expensive proposition in a commodity market; it will take either a lot of central control and willpower, or a lot of time.
Up to 1972, the computing world managed without C, and even afterwards plenty of systems kept doing quite well without any trace of C code until the early 1990s.
C was created in response to the then-dominant practice of using assembler for all kinds of coding tasks (not just “system programming”). I wouldn’t characterize that situation as “doing quite well.” On the other hand, C didn’t take off on the IBM System/370 until much later, due to the availability of PL/I.
JOVIAL, ESPOL/NEWP, PL/I, PL/S, BLISS and a couple of others existed and were in active use outside Bell Labs.
Even Multics’ actual history was a failure only from Bell Labs’ perspective: the system carried on, and was even rated as more secure in a DoD security assessment.
Even IBM did all their RISC research in PL.8 before deciding to create what would become AIX, since by then it was all about the UNIX workstation market.
Had AT&T been allowed to sell UNIX from day one at the same price as competing OSes, I bet C wouldn't be around.
C is effectively a _predictable_ and portable assembler. You can't say that about Rust, because values are moved all over the place, which makes it harder for humans to predict the emitted code; and Go, afaik, has a runtime.
Rust uses references all over the place and reuses stack addresses that once belonged to another object whenever the compiler can guarantee it's safe, but that makes the emitted code much harder to reason about.
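To make the point concrete, here's a minimal sketch (my own example, not from the thread) of Rust's move semantics: ownership transfers by default rather than the bitwise copies C would do, and once a value is moved the compiler can treat its old storage as dead and reuse it. The function name `take` is just an illustrative choice.

```rust
// Sketch of Rust move semantics: passing a String by value moves it,
// so the caller's binding becomes unusable and its storage reclaimable.
fn take(s: String) -> usize {
    s.len() // `s` is dropped here; its stack slot and heap buffer are freed
}

fn main() {
    let msg = String::from("hello");
    let n = take(msg); // `msg` is moved into `take`
    // println!("{}", msg); // would not compile: value moved above
    // The compiler may now reuse msg's stack slot for a new binding,
    // though whether it actually does is an optimization detail.
    assert_eq!(n, 5);
    println!("length = {}", n);
}
```

Whether a given stack slot is actually reused is not guaranteed by the language; it depends on the optimizer, which is exactly why the resulting code is harder to predict than C's.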
If you can reason about your code at -O0, you can take it that it will remain correct at higher optimization levels.