> If you can have static types that work for a lot of the JavaScript already out there without soundness, or no types at all, which one do you pick?
You restrict the allowable subset of JavaScript in your codebase further to one you know is sound. This is pretty much the same strategy that `"use strict"` has, and literally no one complains about that, so it was never really about restricting JavaScript anyway, people just like to dump on soundness as being too un-pragmatic.
Sure, sometimes you really do need to work around the type system and express something you know can't go wrong. But it's always possible to contain it to a small, 'unsafe' part of the codebase! For example, Rust proves this quite nicely.
Soundness can be too restrictive; there are a lot of useful things you can do in TypeScript, even in new code bases, that are not sound: e.g., you can re-apply an interface to any object even when it is impossible to know whether the terms line up. After having used TypeScript for a year, it isn't a big deal; the type errors that might arise during debugging are never very surprising and are usually related to logic problems (I would have had those errors in C# anyway, because I would have had to use weaker types).
I appreciate soundness, I've used sound languages, but I really dislike overly strict interpretations of soundness that dump on common constructs like null. Ya, null has some bad points, it will cause run-time exceptions, but so do many other things in a programming language. Yes, that level of soundness is way too un-pragmatic *for my needs*. I understand the academic problems and I'm not losing any sleep over them.
> Sure, sometimes you really do need to work around the type system and express something you know can't go wrong. But it's always possible to contain it to a small, 'unsafe' part of the codebase! For example, Rust proves this quite nicely.
You can't confine the effects of unsafe operations to the block of code flagged as unsafe.
> You can't confine the effects of unsafe operations to the block of code flagged as unsafe.
That's because the module is the isolation boundary in Rust, generally. So for maximum safety, getting the unsafe code into a smaller module and then re-exporting only its safe interface is the way to go.
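To make that concrete, here's a minimal sketch of the pattern (a toy example of mine, not from any real crate): the unsafe block and the invariant it relies on live in one small module, and callers only ever see the safe interface:

```rust
mod ring {
    pub struct Ring {
        buf: Vec<u8>,
        pos: usize, // invariant: pos < buf.len(), maintained by every method
    }

    impl Ring {
        pub fn new(size: usize) -> Ring {
            assert!(size > 0);
            Ring { buf: vec![0; size], pos: 0 }
        }

        pub fn advance(&mut self) -> u8 {
            self.pos = (self.pos + 1) % self.buf.len(); // upholds pos < len
            // SAFETY: the invariant above guarantees `pos` is in bounds,
            // so the unchecked access cannot read out of bounds.
            unsafe { *self.buf.get_unchecked(self.pos) }
        }
    }
}

fn main() {
    let mut r = ring::Ring::new(4);
    // Callers never see the raw buffer or `pos`; auditing the `ring`
    // module alone is enough to check the unsafe block.
    println!("{}", r.advance());
}
```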
Can't unsafe Rust code convert integers to pointers and thus write to arbitrary memory locations within the address space of the process? The module boundary won't help with soundness, strictly speaking. If any unsafe code ever executes, then in principle it can (e.g.) zero out a pointer in some safe code which is guaranteed to be non-null as far as Rust is concerned.
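For example, something like this compiles fine, and no module boundary stops the write from landing anywhere in the process (the address is made up; actually running this is undefined behavior):

```rust
fn main() {
    let addr: usize = 0xdead_beef;
    let p = addr as *mut u64; // integer-to-pointer cast: allowed even in safe code
    unsafe {
        // The write itself requires `unsafe`, but it can clobber anything,
        // including a "guaranteed non-null" reference owned by safe code.
        *p = 0;
    }
}
```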
The point is that if the unsafe code is correct, any assumptions it is making need to be in the module. Ergo, you should only have to scan the module itself to figure out whether the code is safe, because if you can't prove safety by looking at the module, it's ipso facto not safe.
Ok, but then that is true in any language that allows unsafe code. You can also put raw pointer manipulation in a module in C++ code.
I do think the significance of Rust's unsafe keyword is exaggerated sometimes. It is just a bit of linting automatically provided by the compiler. Block scoping and tagging of unsafe code doesn't in itself add any guarantees. It's only barely more useful than a simple comment saying "beware: unsafe code here".
The point isn't to tag the unsafe code. The point is that code NOT bearing the tag is guaranteed to be memory safe and free of race conditions.
It's like having a Trusted Computing Base (TCB) which is very small, verifying the hell out of that TCB, and then relying on its invariants when building up the rest of the system.
C++ cannot do this at present because all code must by definition be treated as untrusted/unsafe. In Rust you simply don't have access to do e.g. pointer arithmetic in safe code and that code is kept separate from unsafe code by the compiler.
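E.g., in safe Rust you can create a raw pointer, but the compiler refuses to let you read or write through it outside an unsafe block:

```rust
fn main() {
    let x = 42u32;
    let p = &x as *const u32; // making a raw pointer is allowed in safe code
    // let y = *p; // error[E0133]: dereference of raw pointer is unsafe
    let y = unsafe { *p }; // reading through it requires an unsafe block
    println!("{y}");
}
```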
>The point is that code NOT bearing the tag is guaranteed to be memory safe and free of race conditions.
No, it isn't, because it may call code that uses unsafe constructs. (That's why you don't have to put an unsafe block around your code every time you push a value into a vector.)
>C++ cannot do this at present because all code must by definition be treated as untrusted/unsafe.
No, only the code that uses unsafe constructs needs to be treated as unsafe. You could easily make a tool that automatically inserted 'unsafe' tags around C++ code that used unsafe constructs such as pointer arithmetic. The fact that this tool is built into the Rust compiler is just a convenience.
> No, it isn't, because it may call code that uses unsafe constructs. (That's why you don't have to put an unsafe block around your code every time you push a value into a vector.)
For the last time: if you have proven the module using the unsafe code correct, you do not have to treat code calling it as unsafe, and safe code calling it is guaranteed to be free of data races and memory safe.
> No, only the code that uses unsafe constructs needs to be treated as unsafe. You could easily make a tool that automatically inserted 'unsafe' tags around C++ code that used unsafe constructs such as pointer arithmetic. The fact that this tool is built into the Rust compiler is just a convenience.
That wouldn't solve anything at all. C++'s type system cannot and does not enforce safe usage of APIs like string_view; it doesn't matter whether the safe code is using "unsafe constructs." If you choose to mark all C++ or C libraries that rely on invariants that can't be proven safe by the type system as unsafe, you will quickly find that essentially your entire program is covered in an unsafe block. The difference in Rust is not that it has unsafe blocks, but that its type system can safely encapsulate interesting APIs whose proof of safety is nontrivial.
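To illustrate the difference (a deliberately non-compiling sketch of mine): the Rust analogue of a dangling string_view is rejected by the borrow checker before the program ever runs:

```rust
fn main() {
    let view: &str;
    {
        let owner = String::from("temporary");
        view = owner.as_str(); // borrow of `owner`
    } // `owner` is dropped here while `view` still borrows it
    println!("{view}");
    // error[E0597]: `owner` does not live long enough
}
```

The equivalent C++ (a string_view outliving its string) compiles silently.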
> if you have proven the module using the unsafe code correct, you do not have to treat code calling it as unsafe
Yes, I know, but lomnakkus said "code NOT bearing the [unsafe] tag is guaranteed to be memory safe" without adding this qualification. Without the qualification, the statement is incorrect.
>The difference in Rust is not that it has unsafe blocks
Indeed. That’s what I said above.
For some reason, Rust advocates don't like it when people point out that unsafe blocks, in and of themselves, do not add any kind of safety guarantees to the language or limit the effects of unsafe code.
Of course Rust's type system is able to express important invariants that C++'s type system can't, and this can be used to prevent iterator invalidation etc. etc. The point is that if you do have an unsafe block in your own code, you are going to have to go to the trouble of constructing a formal proof -- using tools that haven't been fully developed yet -- in order to be sure that any of the rest of your code is safe. And at present, the proof requires a translation step, which cannot be formally specified, from Rust code to LambdaRust code.
Rust is undoubtedly much safer than C++ in practice. I just think the extent to which it provides hard-and-fast safety guarantees tends to be exaggerated somewhat in its marketing.
Sure. There is a lot of work left to do to bridge the gap between the formal model and Rust code. But we can be reasonably confident at this point that the libraries we've proven correct actually are correct (modulo translation errors), and we specifically set out to prove things about libraries that were safe for interesting or nontrivial reasons. That's a huge step forward. It means that, with some time and effort, producing codebases with formal proofs of memory safety can be done at quite low verification cost. The executable might need to be generated with something other than LLVM, of course, but using tools like CompCert is possible in these cases; the primary difficulty with it is normally that it only correctly compiles correct C, which is difficult to find in the wild to say the least.
Not only that, but thanks to Rust's type system being able to express separation logic invariants, we may be able to do many functional proofs of correctness more easily; right now, refinement types are mostly limited to systems like Liquid Haskell because it is hard to generate invariants for impure code, but by integrating with Rust's type system such things become possible. There is at least one research group currently investigating this possibility, and it looks quite promising. Such work entirely depends on the fact that safe code that passes the borrow checker must be memory safe and has enough annotations that resource aliasing can be determined statically, freeing programmers from the burden of manually specifying primitives like separating conjunction and magic wand. Invariants would be stated at the function level and verified modularly (on a function by function basis), rather than current state of the art techniques used by model checkers, which usually require whole program verification. This has the chance to put scalable formal verification of functional correctness for many common errors (like integer overflow and out of bounds array accesses) in the hands of programmers who don't have training in formal verification; I find that very exciting!
I'd also add: requiring the keyword unsafe actually is fairly important in practice. Without it, there wouldn't be a way to assert to the compiler when a block of code that used unsafe primitives was safe (I suppose you could use something like "trusted" to mark a block safe instead of "unsafe" to mark it unsafe, but I suspect the ergonomics of that would be fairly untenable). So it's not entirely right to say that unsafe is pointless; it's a necessary component. It's just a trivial one.
I don't deny that Rust's lifetimes make it possible to provide static guarantees that C++ can't.
The issue for me is a common misperception about unsafe blocks that the Rust community seems reluctant to squash. All I did in this (huge) thread was point out that unsafe code in an unsafe block can screw up 'safe' code outside the unsafe block. That is simply a fact about how Rust works.
The response in this case has been to go off on all kinds of tangents about formal verification that have little relevance to everyday Rust coding. Joe Coder's not going to formally verify his implementation of a doubly-linked list that uses a couple of unsafe pointer operations. It's probably best if he's aware that strictly speaking, none of the code that uses that implementation is guaranteed to be safe.
This is fundamentally the same kind of encapsulation that 'modern' C++ uses (unsafe code with hopefully safe wrappers), with -- as you point out -- the significant difference that C++ has a type system which would require runtime checks to enforce certain guarantees that can be verified at compile time in Rust. (I believe it is possible to write C++ containers that catch invalid iterators at runtime: https://ac.els-cdn.com/S1571066111000764/1-s2.0-S15710661110...)
> Without [unsafe blocks], there wouldn't be a way to assert to the compiler when a block of code that used unsafe primitives was safe
Well, sure, but that's something that could be done by a linting tool without any significant loss of functionality, so far as I can see.
Frankly: Joe Coder isn't implementing a linked list in Rust. I encourage you to take a look at the repositories you can find on GitHub; while many use unsafe (mostly to call out to third-party C libraries), very few are doing so to create nontrivial data structures. Obviously, each usage of unsafe has the potential to destroy all memory safety guarantees for the program, and I personally think people reach for it much too frequently. Nonetheless, if something goes wrong, people know where to look; as it turns out, it is actually possible for the unsafe code to be correct even when it lacks a formal proof (only two memory safety errors were found during the verification effort, that I'm aware of). One hopes that the likelihood of this code being correct goes up significantly when there is less of it and it is marked for extra scrutiny (I certainly know that my first reaction when I look at a new Rust repository is to search for `unsafe` and see what bugs I can find).
When you talk about formal guarantees, you are inherently talking about formal verification. I think it's pretty disingenuous to claim that such talk is irrelevant here; at least to me, it's highly relevant, and I really do care about being able to formally verify code on top of Rust. However, even if that doesn't happen, Rust is still in pretty good company re: safety; pretty much all industry languages expose a C FFI, which means that theoretically all their safety guarantees are completely out the window if even one C function is called (there are people who are working on formally verified safe linking, but the work is still very early). The C FFI is usually wrapped in something that is (hopefully) a safe wrapper, though not always. This is exacerbated by the fact that most languages' type systems can't actually enforce the sorts of invariants that C code needs (for example, a lot of older C libraries use static data in a non-thread-safe way; most languages can't design APIs that require calls to a library to stick to a single thread). Since most of the code in those languages is written in the safe subset, it's nonetheless pretty easy to find the (broad) culprit for a segfault when something goes wrong, and over time most safe languages develop their own safer alternatives to the unsafe C libraries (where possible). The hope (my hope, anyway) is that over time, more and more Rust can transition to the safe subset (which may require extending the type system), and much of the Rust that can't is formally verified. I don't realistically expect that all Rust will be so verified, but I do hope that (1) if you want to, you'll be able to formally verify realistic, large programs, and (2) even those that don't have few enough bugs that it becomes very difficult for attackers to exploit them.
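For reference, this is the usual shape of the safe-wrapper pattern described above (a sketch binding libc's strlen; the wrapper's job is to assert the invariant the C side needs):

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

extern "C" {
    // Raw binding to libc's strlen: calling it is unsafe.
    fn strlen(s: *const c_char) -> usize;
}

// Thin safe wrapper: CStr guarantees a valid, NUL-terminated pointer,
// which is exactly the precondition strlen requires.
fn safe_strlen(s: &CStr) -> usize {
    unsafe { strlen(s.as_ptr()) }
}

fn main() {
    let s = CString::new("hello").unwrap();
    println!("{}", safe_strlen(&s)); // prints 5
}
```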
As far as the link you posted goes: being able to catch iterator invalidation at runtime in the presence of multiple threads incurs a pretty nontrivial performance impact (Java, for example, doesn't consistently detect it in the multithreaded case). I wrote this before I even read the article. In the article, they explicitly did not consider multithreaded programs (which is true of most such articles). Despite that, while some operations were within 10% of the unsafe version, or even faster, others were anywhere from 2 to 7 times slower. Despite the article's claims, I don't see any reason to believe that the slow variants are less common than the fast ones.
And unfortunately, iterator invalidation is not some isolated issue: it's a common example of what Rust can guarantee, but it stems from the much more basic issue that updates to data structures with restrictions on legal byte patterns must be restricted in the presence of aliasing. Updating a C++ variant at all, for instance, is not safe without strong aliasing guarantees, even in the single threaded case. For a substantial amount of C++ code, you'd have to either go the Java route (make every data structure valid after any word-size partial update, so all operations can be weakly atomic, at the cost of losing interesting mutable in-place data structures and not being able to perform some common compiler optimizations), or wrap updates to many kinds of values in mutexes (which would completely destroy performance in most cases). "Just do it at runtime" has severe performance tradeoffs in the domain that C++ is targeting; requiring shared_ptr to be atomic in the presence of threads is already bad enough that many larger projects write their own custom reference counted types (whose correct use can't be verified at compile time). If people were willing to eat it everywhere, I think they would have switched to Objective C or Swift by now, which use clever tricks to reduce the impact of pervasive reference counting. To see some examples of how this would have to work, see how Swift handles things like string slicing (and compare it to the Rust or C++ versions).
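A small sketch of what I mean, using a Rust enum in place of a C++ variant; uncommenting the assignment makes the compiler reject the aliased update:

```rust
enum Value {
    Int(i64),
    Text(String),
}

fn main() {
    let mut v = Value::Text(String::from("hi"));
    if let Value::Text(s) = &v {
        // v = Value::Int(0); // error[E0506]: cannot assign to `v` because it is borrowed
        // If this assignment were allowed, `s` would be left pointing into the
        // freed String -- the same hazard as mutating a C++ variant through an alias.
        println!("{s}");
    }
}
```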
The reason I can say all this pretty confidently is because Rust has dynamic safety mechanisms as well. In almost all cases, you can choose between four options: a relatively cheap (but expensive space-wise) single-threaded version, an almost free (but restricted in functionality) single-threaded version, an expensive time-and-space-wise (and relatively general, but still restricted) multi-threaded version, and an extremely restricted multi-threaded version that can't handle interesting updates. People usually opt for a single-threaded version where they use it at all, because they cannot eat the performance and functionality impact of the multithreaded one. If you try to enforce safety with dynamic checks in C++, you will lose to Rust on performance; in fact, you will probably lose to Swift (which is designed from the get-go with these tradeoffs in mind).
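Roughly, my reading of the four options maps onto the standard library like this (a sketch; the exact cost tradeoffs depend on the workload):

```rust
use std::cell::{Cell, RefCell};
use std::rc::Rc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};

fn main() {
    // Single-threaded, general: dynamic borrow checking at runtime,
    // with some space and time overhead.
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    shared.borrow_mut().push(4);

    // Single-threaded, nearly free but restricted to get/set-style access.
    let counter = Cell::new(0u32);
    counter.set(counter.get() + 1);

    // Multi-threaded, general but comparatively expensive (locking + Arc).
    let locked = Arc::new(Mutex::new(String::from("hi")));
    locked.lock().unwrap().push('!');

    // Multi-threaded, cheap but limited to word-sized atomic updates.
    let atomic = AtomicU64::new(0);
    atomic.fetch_add(1, Ordering::Relaxed);
}
```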
In short: even single threaded dynamic validation can have a noticeable performance impact. The bare minimum you need to be competitive with Rust and remain safe is distinct single-threaded and multithreaded dynamic mechanisms, together with the ability to reject code that misuses them at compile time; these mechanisms can only handle aliasing tracking and do not replace the functionality of lifetimes, which can only be relaxed using unique ownership, reference counting, or leaking data. Static thread safety verification would thus need to be restricted to data without references. Given all this, I believe that the only way to make everything work dynamically would be to disallow first-class references entirely in C++. That would break almost all C++ code in the wild, and would make its runtime model comparable to that of Swift. Common tricks to get around reference counting in Rust (like arenas) would necessarily have a large performance impact in this model, and would probably not be used much. Modern or not, C++ doesn't encourage writing code in this way.
> Well, sure, but that's something that could be done by a linting tool without any significant loss of functionality, so far as I can see.
It can't be done by a linting tool, because a linting tool has no idea whether a particular use of unsafe code is well-encapsulated or not. The only way we know how to prove this right now is using formal verification, so we assert safety of such code with manual annotations. I really should have mentioned the alternative as "declare everything that isn't allowed to use unsafe code as `untrusted`" because that's the actual alternative; marking well-encapsulated unsafe code as trusted is actually how things work today. I hope you can see why having the default be "trusted" is not a satisfactory alternative for a language that wants the default to make writing safe code easier than writing unsafe code; you should have to opt into being able to shoot yourself in the foot, and it should be easy to find code that claims to be upholding invariants.
>Frankly: Joe Coder isn't implementing a linked list in Rust
Not so sure about that. I wrote a parse tree transformer in Rust and wanted to write a mutable iterator for my parse tree. That requires unsafe code. So you don't have to be doing anything particularly esoteric to require a small amount of unsafe code. Writing a small amount of unsafe code is not the end of the world, of course. I am not saying that this is a major flaw in the language.
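For the curious, this is roughly the shape such a mutable iterator takes (a stripped-down sketch of mine over a slice, ignoring zero-sized types); the unsafe block asserts what the borrow checker can't see on its own, namely that the yielded references never alias:

```rust
use std::marker::PhantomData;

struct MutIter<'a, T> {
    ptr: *mut T,
    end: *mut T,
    _marker: PhantomData<&'a mut T>,
}

impl<'a, T> MutIter<'a, T> {
    fn new(slice: &'a mut [T]) -> Self {
        let range = slice.as_mut_ptr_range();
        MutIter { ptr: range.start, end: range.end, _marker: PhantomData }
    }
}

impl<'a, T> Iterator for MutIter<'a, T> {
    type Item = &'a mut T;
    fn next(&mut self) -> Option<&'a mut T> {
        if self.ptr == self.end {
            None
        } else {
            // SAFETY: ptr is in bounds, and each element is yielded exactly
            // once, so no two returned `&mut T` alias.
            unsafe {
                let item = &mut *self.ptr;
                self.ptr = self.ptr.add(1);
                Some(item)
            }
        }
    }
}

fn main() {
    let mut xs = [1, 2, 3];
    for x in MutIter::new(&mut xs) { *x *= 10; }
    assert_eq!(xs, [10, 20, 30]);
}
```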
Re catching iterator invalidation at runtime, I should have said explicitly that I was not claiming that this is as good as catching it at compile time.
>It can't be done by a linting tool, because...
I meant that the functionality provided by the 'unsafe' keyword itself could be enforced by a linting tool. (E.g. any block that uses one of the unsafe operations must contain a comment saying 'unsafe code here'.) If this isn't the case, I must have some fundamental misunderstanding about what the unsafe keyword does. The Rust docs suggest that it simply enables a small number of unsafe operations such as pointer arithmetic.
Well, it's pretty trivial, but there are a few subtleties. Keep in mind that unsafe has another meaning when attached to a function type or trait, which interacts with the type system. That can have surprising but necessary consequences (such as not being able to pass an unsafe function as an argument where a closure with the same signature is expected). But yeah, it's not really a first class type and it has deliberately simple semantics, so if you just want to replicate the exact functionality of unsafe in Rust you could probably do it as a lint (though for stylistic and ideological reasons I don't think Rust would ever consider doing so).
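For example (function names made up), this is the coercion surprise mentioned above:

```rust
unsafe fn twice(x: i32) -> i32 { x * 2 }

fn apply(f: fn(i32) -> i32) -> i32 { f(21) }

fn main() {
    // apply(twice); // error[E0308]: expected fn pointer, found unsafe fn
    let answer = apply(|x| unsafe { twice(x) }); // wrap it in a safe closure instead
    println!("{answer}");
}
```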
Yes, that is correct. In theory, a single incorrect unsafe block can have effects on every other part of the program.
Of course the idea is that you limit the unsafe stuff to small areas of the code so you can really think through those parts to make sure they do what you think they should do. Another thing that's nice about containing the unsafe code to small blocks is that you start each unsafe block from a state of the program with all the guarantees from the surrounding safe code, which makes it easier to recognize what could go wrong in the unsafe code.
I think the idea is that you tell the compiler to assume that this particular chunk of code does no wrong, and the compiler shouldn't bother trying to prove it. Of course, if you lie, then everything else can break down.
But in general you're incentivized not to lie (and not to trust liars), so while it's not provably "safe" to use, we can show everything else is safe as long as the assumption holds.
The type system is sound, where it exists (in "safe" rust), and not sound where it doesn't (in "unsafe" rust).
In the same fashion that nothing is sound if you don't trust your cpu to perform primitive operations correctly. But if you do trust it, then the type system has a chance at being "sound".
>I think the idea is that you tell the compiler to assume that this particular chunk of code does no wrong, and the compiler shouldn't bother trying to prove it. Of course, if you lie, then everything else can break down.
Right, but the same can be said for C++. If you can trust all of your not-provably-safe C++ code to be safe, then you can trust your whole codebase to be safe.
Ya, the difference is that in your 1MLOC code base you only have to review 10k lines with a microscope instead of every single one; the rest you can more or less trust not to do any funny business (with regard to memory).
Not true at all. All the 'unsafe' keyword does is allow a bunch of code constructs which are not allowed outside unsafe blocks. You can just as well limit your attention to that subset of constructs in your C++ code. You can't do it by grepping for 'unsafe', but you could do it using a C++ parser.
Such a subset would be extremely limited and could hardly be called C++, and I still don't think it would be possible; the problem is that these problems occur across files, not just because someone happened to use goto in isolation on line 123 of filexyz.cpp.
Object ownership specifically is the biggest problem: you can enforce usage of shared/unique_ptr everywhere, but there are still instances where you pass a reference or raw pointer somewhere, the last owning pointer goes out of scope in a callback function, and the reference now points into whatever. Static allocation can solve a lot of this, but then you are very constrained in the types of programs you can write. These are exactly the problems that the borrow checker avoids.
I've used many static analysis tools and seen many things they can't catch; one example is forgetting to check whether your std::map::find returned end() before using the result. If you do that, you are suddenly reading or writing random memory.
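Compare the Rust equivalent, where the lookup returns an Option and the unchecked-use mistake isn't even expressible (a trivial sketch):

```rust
use std::collections::HashMap;

fn main() {
    let map: HashMap<&str, i32> = [("a", 1), ("b", 2)].into_iter().collect();
    // HashMap::get returns Option, so "forgot to check end()" can't happen:
    // the compiler forces you to handle the None case before touching the value.
    match map.get("c") {
        Some(v) => println!("found {v}"),
        None => println!("not found"),
    }
}
```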
Sure, but that has nothing to do with C++'s absence of an 'unsafe' keyword, or the presence of one in Rust. It has to do with differences in the actual language semantics (and in the unsafe design of the STL and common C++ coding practices).
It has everything to do with unsafe. Because outside unsafe you are forced to use the borrow checker, among other things, which prevents you from trashing memory, with a 100% guarantee. The number of ways you can trash memory in C++ can't even be listed, especially when it comes to object lifetimes, so there is no equivalent subset of C++ providing those 100% guarantees. If you make such a mistake just once, the whole execution of the rest of your program might be randomized. Multiply all those possible mistakes by the size of your code base and you see the problem. In Rust, you have all these problems inside unsafe, but the amount of code inside unsafe is much, much smaller, so it is feasible to let only your most senior guys write it and to review it extra thoroughly.
No, it has nothing to do with the presence of the unsafe blocks. Rust would be exactly as safe or unsafe as it currently is if the requirement to wrap unsafe operations in unsafe blocks were removed. The only difference would be that it would be more difficult to grep for code that used those operations.
Yes, one bad unsafe block in Rust can screw up even safe code; everybody here understands that. The difference is that the potential problem surface is much smaller, and it is obvious where it might be.
And by limiting the attack surface the language becomes safer. Not formally, but in practice.
How small the potential problem is depends purely on how much unsafe code you write, regardless of whether it has a special keyword associated with it.
You can imagine, though, that the intended model for Rust programming is to avoid "unsafe" as much as possible. Essentially, C++ is unsafe by default, while Rust is safe by default.
You have to go out of your way to leave safe code for the most part (if Rust's design does what was intended), which implies that most of your code should be provably safe (by Rust's definition of safe), as long as the ideally small unsafe blocks are correct, if the program compiles.
Whereas in C++, there is no such guarantee regarding compilation; even your "safe" code isn't shown to be safe by the compiler (additional tooling may apply further checks); your whole codebase is assumed to be unsafe.
The "potential problem" is naturally the size of your codebase, whereas rust offers it as a subset of the codebase.
If the unsafe code is written "correctly" then of course, by definition, there isn't anything to worry about. The same is true of unsafe code in C or assembly.
I took a look at the paper that you're obliquely referring to [1], and I think you are exaggerating the significance of its results for practical Rust programming. In fact, the paper underlines how difficult it is to use 'unsafe' correctly. As soon as you use any unsafe code, the language can no longer guarantee that any of your code is safe. Thus, to guarantee safety, you must either never use unsafe code except in libraries which have already been proven safe, or develop proofs for your own unsafe code. There are many instances where it is almost mandatory to use unsafe code in Rust (e.g. writing a mutable iterator for a custom data structure), so this is not a theoretical problem. As far as provable safety is concerned, the situation seems pretty similar to the situation in C++ (for which there is already a vast literature on techniques for formal verification of safety).
That is not to deny that Rust code is likely to be safer than C++ code in practice. However, that has nothing to do with having unsafe operations block-scoped and tagged with the 'unsafe' keyword. I'm not a fan of this syntax because it can easily give newcomers to the language the mistaken impression that the danger is somehow contained within the relevant block.
That link is broken, but if it's the paper I think it is, the key result is that you can extend safe Rust with safe components that do use `unsafe` as long as you are able to prove these components to be sound.
The same is not true of either C or assembly, and there are already soundness proofs of most unsafe components in the standard library available, which means that safe Rust + the standard library is sound. With safe Rust + the standard library, there are almost no data structures that you can't write.
I don't understand what this claim is supposed to mean, as there is no standard definition of what "Safe C" consists in. You can prove that some C programs meet some definitions of safety, and there are memory safe subsets of C. What exactly are you saying isn't possible?
The claim that this is fundamentally a kind of verification that you can't do for C or C++ isn't made in the paper. It's not clear to me exactly where this claim is coming from or what precisely it consists in.
> there are already soundness proofs of most unsafe components in the standard library available, which means that safe Rust + the standard library is sound
Only if 'most' means 'all'!
The paper itself notes that "...the problem cannot easily be contained by blessing a fixed set of standard libraries as primitive and just verifying the soundness of those; for although it is considered a badge of honor for Rust programmers to avoid the use of unsafe code entirely, many nevertheless find it necessary to employ a sprinkling of unsafe code in their developments."
> I don't understand what this claim is supposed to mean, as there is no standard definition of what "Safe C" consists in. You can prove that some C programs meet some definitions of safety, and there are memory safe subsets of C. What exactly are you saying isn't possible?
What isn't possible is encapsulating many of the safe APIs that Rust supports using C's type system. You can restrict all of the C code in a program to a subset that is safe, but it isn't a subset that anyone actually uses; by contrast, it is absolutely feasible to write programs that use safe Rust outside a handful of unsafe functions in the standard library. The primary reason is that Rust's type system is capable of encapsulating many more safe guarantees than C's is.
As someone who works on the RustBelt team: you are misunderstanding the paper. What the paper argues is that we need to be able to prove the soundness of arbitrary unsafe code modularly, not just treat the standard library as type-system primitives, because we want Rust's set of "primitives" to be easily extensible.
> The claim that this is fundamentally a kind of verification that you can't do for C or C++ isn't made in the paper. It's not clear to me exactly where this claim is coming from or what precisely it consists in.
It is fundamentally different. C++ does not have lifetimes or compile time verification of ownership; its references cannot be safely encapsulated behind an API, and it does not have "modes" like share and own. These are all critical for Rust's safety proofs. This is not for lack of trying; many people have attempted to define safe subsets of C or C++, or add these features to C, such as Cyclone. In all cases, new concepts and types needed to be added to the language and most existing code was no longer usable.
The situation isn't remotely similar to C++. After you prove an algorithm correct in C++, you can still generally not guarantee that it is used correctly by the program--the needed preconditions and postconditions are not expressible in its type system, and code that passes the C++ type checker can easily violate invariants elsewhere. In Rust, after you prove code in an unsafe module correct with respect to its type, you can use the module from any safe block and still preserve memory safety and data race freedom.
The important thing that is different about code proved in Rustbelt is that the guarantees can be made local. Once you prove a semantic type for a particular piece of unsafe code, you do not have to know anything about that unsafe code when you prove some other piece of unsafe code correct, as long as they are independent of each other (including unsafe code that is invoked parametrically). This means that the proof effort does not scale with the size of your overall code, or quadratically with the number of unsafe modules, but linearly with the number of unsafe modules. That's absolutely not true in C++; it works entirely because of Rust's precise accounting of ownership.
Finally: even in Rust that uses a lot (relatively speaking) of unsafe code, only a small percentage of the overall code is unsafe. Typical codebases have many hundreds of lines of safe code per line of unsafe code. This means the proof effort is quite reasonable. Automated proofs about C and C++ programs are not trivial, and no known technique scales to real code beyond a few hundred lines; proof-assistant-based techniques generally must either prove things about the entire program, or use restricted subsets and annotate them with information very much like what is in Rust's type system. The seL4 microkernel is one of the largest formally verified C programs, and it took 20 man-years of effort and hundreds of thousands of lines of proof, while the actual C code clocks in at under 10,000 lines. While the seL4 proof effort cared about more than memory safety, a significant amount of it was devoted to that. By contrast, I have personally written 10,000+ line Rust codebases with 4 lines of unsafe in them (and even those could be avoided if I were willing to sacrifice a bit of performance); I used only a few standard library functions, and third-party crates like Rayon that have undergone a fair amount of verification effort themselves. Memory safety in Rust actually scales to realistic codebase sizes.
The existing result isn't everything; there is still a lot of work to do to make sure it's accurately capturing all of Rust's semantics. But most of the "hard parts" have been done--we are pretty confident that the encapsulation I'm describing actually works. That is far better than the situation in C++.