I agree. I think of address-space context-switching overhead as the performance price we pay for not being able to run all our programs in a single address space, which we could safely do if we knew all the programs were emitted by a trusted compiler that disallows unsafe memory access. Imagine if system calls were just ordinary functions that could be called with no more than normal function-call overhead. What if they could even be inlined?
Obviously there are a lot of little details you'd have to work out. Like how to make such a trusted compiler in the first place, and how to sandbox unsafe code and legacy applications that were compiled by an untrusted compiler.
If this seems far-fetched or like too much work, consider the lengths high-performance hardware like HPC network interfaces goes to in order to avoid system calls, to the point where applications talk to the hardware directly. Is that really a sustainable practice long term? And how can anyone audit the security of such hardware devices?
This is what Microsoft Research's Singularity OS did, with the language being C#. Their argument was that the MMU's address-space isolation amounted to a 30% "unsafe code tax", so even if C# was slower than C, as long as they could keep the slowdown below that threshold it was still an overall performance win.
I'm pretty sure the discovery of Meltdown/Spectre and similar speculative-execution attacks would completely wreck this model. The fix for those exploits has been to make the isolation barriers even stronger, but if you don't have them at all, you're wide open. And if you had such an OS but then had to split it back into separate address spaces, you've lost the performance gains and are left with a slower OS that's harder to develop drivers for.
Yeah, speculation attacks are a big problem. It seems like a trusted compiler might be able to avoid some speculation attacks by not emitting dangerous sequences of instructions, but I don't know what the state of the art is for Spectre mitigations, or whether a compiler could formally verify that a program is immune to all (known) speculation attacks.
> I think of address-space context-switching overhead as the performance price we pay for not being able to run all our programs in a single address space, which we could safely do if we knew all the programs were emitted by a trusted compiler that disallows unsafe memory access. Imagine if system calls were just ordinary functions that could be called with no more than normal function-call overhead.
That's pretty much how the Amiga worked, and that level of technology achievement is still unsurpassed today.
You already see this in PaaS and serverless for managed runtimes: I don't care whether my Java and .NET code runs on bare metal, a microkernel, a unikernel, or whatever.
Rust will not replace anything because it’s impossible to write code in it. Everything you write is a syntax error that requires an exobrain to figure out.
GP has been posting the same thing in many threads when Rust comes up. Apparently they think it's funny, or they're really bad at writing Rust code and feel like venting.