The article clarifies the difference between "programming language safety" and "safety-critical systems":
> Programming language safety refers to a language’s ability to prevent errors or undefined behaviors at compile time or runtime. On the other hand, "safety-critical" refers to a system’s ability to operate without causing accidents or catastrophic failures that will result in harm to people, property or the environment. So, while safety-critical systems rely on languages that emphasize safety and security, such as Rust, programming tools are only one component of the overall strategy.
Safety-critical projects use a variety of techniques to ensure that things don't go wrong, and that if they do go wrong anyway, the root cause can be traced so it won't happen again.
Most safety-critical projects today are written in C and C++, which have very few compile-time safety checks.
That most projects in these industries are in C and C++? What do you think they use instead?
Or that C and C++ include very few compile-time safety checks? I could see one arguing with the specific phrasing around C++, and maybe giving it a bit more credit here, but it's still objectively less checking than Rust has.
I am referring to the software currently written and deployed in things like cars, airplanes, spacecraft, medical devices, and the like. Stuff that gets certified by processes like ISO 26262 and DO-178C.
Ada is of course used here too, but we're talking about whether the lack of checks at the language level is a problem. My point is that it is not: languages with less checking than Rust are routinely used for these sorts of things.
My point is that if unsafe Rust is the best that Rust can deliver for safety, it leaves a lot to be desired, and while it is better than some other languages, Rust is just not the panacea it is advertised to be. I am not convinced that it's the best that's theoretically possible.
Okay. That's a different thing than what I'm talking about. This thread started with "How can it be safe with `unsafe`?". That's what I was trying to address in this sub-thread.
Rust has never advertised itself as a panacea.
I agree that Rust is not the best language that can theoretically ever exist. I do disagree that it is possible to have a 100% compiler-checked memory-safe language. There will always be some portion of these systems that needs to be verified by hand. And that's okay. The goal here, and how other languages could improve upon what Rust does, is to do to Rust what Rust did to other languages: expand on what is expressible within the safe subset.
Rust may not, but random Rust programmers rave about it all the time as if it were a panacea.
> expand on what is expressible within the safe subset
I hope that Rust itself keeps going further in this direction. Even with the residual unsafe code, bounds could be placed on the possible damage, and the damage corrected as soon as feasible.
Unsafe is still needed in some cases to access low-level APIs or to talk to the hardware.
It can still be safe because the unsafe code is limited in scope, which makes the whole product easier to review.
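A minimal sketch of what that looks like, with a made-up register address standing in for whatever the device datasheet actually specifies:

```rust
// Hypothetical memory-mapped status register. The address is made up
// for illustration; a real one would come from the device datasheet.
const STATUS_REG: *mut u32 = 0x4000_0000usize as *mut u32;

// The unsafe block is small, marked, and easy to audit; callers of
// read_status() stay entirely in safe Rust.
fn read_status() -> u32 {
    // SAFETY: assumes STATUS_REG points at a valid, always-readable
    // device register on this hypothetical platform.
    unsafe { STATUS_REG.read_volatile() }
}
```

The unsafe block is one line with its assumption documented right next to it, which is exactly the review surface you want.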
Pardon me as an outsider, but the primary selling point of Rust is exhaustive automated compile-time safety checking. If this promise is false, then Rust needs more work.
Before Rust, in the early age of C, even Rust would have been considered impossible. But is it really mathematically impossible? Even if it were, surely some bounds can be placed in the code on the damage that unsafe Rust can cause; do such bounds exist? Even Zig has features to control the damage.
Most usages of `unsafe` are necessary because of FFI calls into code written in C, into syscalls, into hardware memory-mapped regions, or other things outside of Rust's control. Without this ability, Rust wouldn't be able to call itself a systems language.
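For example, a plain FFI call (a sketch assuming a POSIX target, where `getpid` is provided by libc):

```rust
// Declaring a C function: the compiler cannot verify this signature
// against the actual library, so every call to it is unsafe.
extern "C" {
    fn getpid() -> i32; // provided by libc on POSIX systems
}

fn main() {
    // SAFETY: getpid takes no arguments, cannot fail, and has no
    // preconditions; the only unsafety is the unchecked declaration.
    let pid = unsafe { getpid() };
    println!("running as pid {pid}");
}
```

Rust has no way to check that the declared signature matches the actual C function, which is exactly why the call site must be marked `unsafe`.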
There are also some usages of `unsafe` for constructs (like linked lists) that the borrow-checker does not support. Those could theoretically be checked, but as far as I know a practical/pragmatic way of doing that has yet to be invented.
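A sketch of why: in a doubly linked list, every node is aliased by two mutable pointers at once (the previous node's `next` and the next node's `prev`), a pattern the borrow checker's exclusive-ownership model can't express, so implementations fall back to raw pointers:

```rust
use std::ptr;

struct Node<T> {
    value: T,
    prev: *mut Node<T>,
    next: *mut Node<T>,
}

struct List<T> {
    head: *mut Node<T>,
}

impl<T> List<T> {
    fn push_front(&mut self, value: T) {
        let node = Box::into_raw(Box::new(Node {
            value,
            prev: ptr::null_mut(),
            next: self.head,
        }));
        // SAFETY: head is either null or a node we allocated with
        // Box::into_raw and have not freed, so it is valid to update.
        unsafe {
            if !self.head.is_null() {
                (*self.head).prev = node;
            }
        }
        self.head = node;
    }
}
```

(This sketch leaks memory; a real list also needs a `Drop` impl, which is where much of the by-hand verification effort goes.)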
Every programming language has parts like Rust’s unsafe. In managed languages, that code is contained in the runtime, or in FFI.
In a practical sense, in order to have no "unsafe" code, the language specification would have to contain a full formal specification of everything it runs on: the hardware, the system software you rely on, all of it. That isn't a good idea, so no language does it. And it doesn't help you when new hardware comes out.
In a more formal sense, you run into things like Rice’s theorem.
> Every programming language has parts like Rust’s unsafe.
This is a pretty weak argument. "Everyone else does it, so it's not wrong." If using this argument, Rust shouldn't even exist.
Rice's theorem seems hand-wavy as an explanation of these real-world limitations. For the sake of argument, even Python is more memory-safe than unsafe Rust.
Unsafe is one of the most important things in Rust. The point is that you stick all the unsafety into well-marked areas that can be more easily checked.
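A contrived but representative sketch (in real code you'd just call `bytes.first()`, which is the safe wrapper the standard library already ships):

```rust
// A safe function wrapping an unsafe core: the bounds check at the
// boundary upholds the invariant, so callers cannot misuse it and a
// reviewer only has to audit this one marked block.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    // SAFETY: the slice is non-empty, so index 0 is in bounds.
    Some(unsafe { *bytes.get_unchecked(0) })
}
```

The invariant is established in safe code at the boundary, so the audit surface is one marked block rather than the whole program.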