I think the idea is that you tell the compiler to assume that this particular chunk of code does no wrong, and the compiler shouldn't bother trying to prove it. Of course, if you lie, then everything else can break down.
But in general you're incentivized not to lie (and not to trust liars), so while it's not provably "safe" to use, we can show everything else is safe as long as the assumption holds.
The type system is sound, where it exists (in "safe" rust), and not sound where it doesn't (in "unsafe" rust).
In the same fashion that nothing is sound if you don't trust your CPU to perform primitive operations correctly. But if you do trust it, then the type system has a chance at being "sound".
> I think the idea is that you tell the compiler to assume that this particular chunk of code does no wrong, and the compiler shouldn't bother trying to prove it. Of course, if you lie, then everything else can break down.
Right, but the same can be said for C++. If you can trust all of your not-provably-safe C++ code to be safe, then you can trust your whole codebase to be safe.
Ya, the difference is that in your 1MLOC code base you only have to review 10k lines with a microscope instead of every single one; the rest you can more or less trust not to do any funny business (with regard to memory).
Not true at all. All the 'unsafe' keyword does is allow a bunch of code constructs which are not allowed outside unsafe blocks. You can just as well limit your attention to that subset of constructs in your C++ code. You can't do it by grepping for 'unsafe', but you could do it using a C++ parser.
Such a subset would be extremely limited and could hardly be called C++, and I still don't think it would be possible. The problem is that these issues occur across files, not just because someone happened to use goto in isolation on line 123 of filexyz.cpp.
Object ownership specifically is the biggest problem. You can enforce usage of shared/unique ptr everywhere, but there are still cases where you pass a reference or raw pointer somewhere, the last owning pointer goes out of scope in a callback function, and the reference ends up pointing into whatever happens to be there. Static allocation can solve a lot of this, but then you are very constrained in the types of programs you can write. These are all the problems the borrow checker avoids.
I've used many static analysis tools and seen many things they can't catch. One example: forgetting to check whether your std::map::find returned end() before using the result. Do that and you are suddenly reading or writing into random memory.
Sure, but that has nothing to do with C++'s absence of an 'unsafe' keyword, or the presence of one in Rust. It has to do with differences in the actual language semantics (and in the unsafe design of the STL and common C++ coding practices).
It has everything to do with unsafe. Outside unsafe you are forced through the borrow checker, among other things, which prevents you from trashing memory with a 100% guarantee. The number of ways you can trash memory in C++ can't even be listed, especially when it comes to object lifetimes, so there is no equivalent subset of C++ that provides those 100% guarantees. Make such a mistake just once and the whole execution of the rest of your program might be randomized. Multiply all those possible mistakes by the size of your code base and you see the problem. In Rust you have all these problems inside unsafe, but the amount of code inside unsafe is much, much smaller, so it is feasible to let only your most senior people write it and to review it extra thoroughly.
No, it has nothing to do with the presence of the unsafe blocks. Rust would be exactly as safe or unsafe as it currently is if the requirement to wrap unsafe operations in unsafe blocks were removed. The only difference would be that it would be more difficult to grep for code that used those operations.
Yes, one bad unsafe block in Rust can screw up even safe code; everybody here understands that. The difference is that the potential problem surface is much smaller, and it is obvious where it might be.
And by limiting the attack surface the language becomes safer. Not formally, but in practice.
How small the potential problem is depends purely on how much unsafe code you write, regardless of whether it has a special keyword associated with it.
You can imagine, though, that the intended model for Rust programming is to avoid "unsafe" as much as possible. Essentially, C++ is unsafe by default, while Rust is safe by default.
You have to go out of your way to leave safe code for the most part (if Rust's design does what was intended), which implies that, if the program compiles, most of your code is provably safe (by Rust's definition of safe) as long as the ideally small unsafe blocks are correct.
Whereas in C++ there is no such guarantee from compilation: even your "safe" code isn't shown to be safe by the compiler (additional tooling may apply further checks); your whole codebase is assumed to be unsafe.
The "potential problem" is naturally the size of your whole codebase, whereas Rust narrows it to a subset of the codebase.