Hacker News

Well, the code you mark unsafe is probably also the most complicated piece and thus the most likely to have a bug. I can trust decent C devs to write their basic logic safely, just not the hyper-optimized portions.


A big blob of complex unsafe code is the opposite of how Rust devs approach unsafe optimizations.

Rust has a pattern of isolating unsafety into small components behind a safe interface, so that the component can be understood and tested in isolation. For example, if you need some adventurous pointer arithmetic, you write an Iterator for it, rather than do it in the middle of a complex algorithm. This way the complicated logic can be in safe code.

It's sort of like Lego, where you build from safe higher-level blocks, but you can design custom blocks if you need.
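A minimal sketch of that pattern. `StrideIter` is a hypothetical name, not from any real crate: the raw pointer arithmetic lives in a tiny, auditable `next`, while callers only ever see a safe `Iterator`.

```rust
// Hypothetical example: unsafe pointer arithmetic confined to a small
// Iterator, exposed behind a safe interface.
struct StrideIter<'a> {
    ptr: *const u32,
    end: *const u32, // one-past-the-end pointer, never dereferenced
    stride: usize,
    _marker: std::marker::PhantomData<&'a [u32]>,
}

impl<'a> StrideIter<'a> {
    fn new(data: &'a [u32], stride: usize) -> Self {
        assert!(stride > 0);
        StrideIter {
            ptr: data.as_ptr(),
            // SAFETY: one-past-the-end of the same allocation is allowed.
            end: unsafe { data.as_ptr().add(data.len()) },
            stride,
            _marker: std::marker::PhantomData,
        }
    }
}

impl<'a> Iterator for StrideIter<'a> {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        if self.ptr >= self.end {
            return None;
        }
        // SAFETY: ptr is in bounds (checked above), and the borrow taken
        // in `new` keeps the allocation alive for 'a.
        let value = unsafe { *self.ptr };
        // SAFETY: both pointers belong to the same allocation.
        let remaining = unsafe { self.end.offset_from(self.ptr) } as usize;
        if remaining > self.stride {
            // SAFETY: stays strictly inside the allocation.
            self.ptr = unsafe { self.ptr.add(self.stride) };
        } else {
            self.ptr = self.end;
        }
        Some(value)
    }
}

fn main() {
    let data = [1u32, 2, 3, 4, 5, 6, 7];
    let picked: Vec<u32> = StrideIter::new(&data, 3).collect();
    assert_eq!(picked, vec![1, 4, 7]);
    println!("{:?}", picked);
}
```

The complicated algorithm that consumes these values can then stay entirely in safe code, and the two `SAFETY` invariants can be reviewed without reading anything else.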


Likewise, C code will be organized into separate pieces with a few small super-optimized/complicated parts, and they can fuzz-test the complicated pieces that are most likely to have buffer overflows.

I can't find the actual code causing the libwebp vulnerability, so idk if mixed safe/unsafe Rust code would've been any better here. Maybe what we really need is an "unsafe-jail" block in Rust that uses a child process limited to a piece of shared mem, and you put big pieces in there to avoid overhead. Like, libwebp can screw up all it wants, just don't touch the rest of my app.


The difference is that in C you can't make a "safe" interface, which the compiler will enforce is used properly.

C's type system is not nearly expressive enough for this. You can barely declare a pointer non-null with extensions, but you can't express ownership. You can't force callers of your function to check for an error before using the returned value. You can't force the correct life cycle of objects (e.g. Rust can have methods that can be called only once, after which further use of the object is statically forbidden. Great for cleanup without double-free.)

C doesn't have the ability to turn off Undefined Behavior for a section of code. You can't just forbid dangling pointers.

For a foolproof API the best you can do is use handles and opaque objects, and a ton of run-time defenses. But in Rust that is unnecessary. Use of the API can be guaranteed safe, and a lot of that is guaranteed at compile time, with zero run-time overhead.
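The "method that can be called once, and no more" part can be sketched with a consuming `self` parameter. `Connection` here is a made-up type for illustration: `close` takes the object by value, so the compiler rejects any later use at compile time, with no runtime checks at all.

```rust
// Hypothetical resource type whose cleanup consumes the object.
struct Connection {
    id: u32,
}

impl Connection {
    fn open(id: u32) -> Self {
        Connection { id }
    }

    fn send(&self, msg: &str) -> String {
        format!("conn {} sent: {}", self.id, msg)
    }

    // Takes `self` by value: after this call the object is moved away,
    // so use-after-close and double-close are compile errors.
    fn close(self) -> u32 {
        self.id
    }
}

fn main() {
    let conn = Connection::open(7);
    println!("{}", conn.send("hello"));
    let id = conn.close();
    println!("closed {}", id);
    // conn.send("again"); // error[E0382]: borrow of moved value: `conn`
    // conn.close();       // error[E0382]: use of moved value: `conn`
}
```

A C API can only document "don't use the handle after `close()`"; here the misuse simply doesn't compile.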

For example, a mutable slice in Rust (a buffer) has a guarantee that the pointer is not null, is a valid allocation for the length of the slice, is aligned, points to initialized memory, is not aliased with any other pointer, and will not be freed for as long as you use it. And the compiler enforces that the safe Rust code can't break these guarantees, even if it's awful code written by the most incompetent amateur while drunk. And at run time this is still just a pointer and a length.
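To illustrate the "still just a pointer and a length" claim: a `&mut [u8]` is a two-word fat pointer at runtime, the same data a careful C API would pass as `(uint8_t *buf, size_t len)`, with all the checks moved to compile time. `fill` is just an illustrative helper.

```rust
// Safe code operating on a mutable slice: no null checks, no manual
// bounds bookkeeping, and the compiler guarantees the invariants.
fn fill(buf: &mut [u8], byte: u8) {
    for b in buf.iter_mut() {
        *b = byte;
    }
}

fn main() {
    // A slice reference is a fat pointer: exactly pointer + length.
    assert_eq!(
        std::mem::size_of::<&mut [u8]>(),
        2 * std::mem::size_of::<usize>()
    );

    let mut buf = [0u8; 4];
    fill(&mut buf, 0xAB);
    assert_eq!(buf, [0xAB; 4]);
    println!("{:?}", buf);
}
```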

In C you don't get this compartmentalization and low- or zero-overhead layering. Instead of unsafe + actually enforced safe, you have unsafe + more unsafe.


Right, but I can trust a decent C developer to use it safely in the simple parts, especially with tooling like valgrind to detect obvious bugs. The only part where I'd say the usual "nobody is perfect" is in the hard parts.


There's 40 years of history of trying, and it doesn't work.

These decent C programmers are like True Scotsmen. When top software companies keep getting pwned, even in their most security-sensitive projects, the excuse is always that they hired crap programmers.

Even basic, boring C can be exploitable. Android was hit by an integer overflow in `malloc(items * size)` (Stagefright). Mozilla's NSS had a vulnerability due to a wrong buffer size, which fuzzing did not catch (BigSig).
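The Stagefright-class bug is exactly this: `items * size` wraps around, `malloc` hands back a tiny buffer, and later writes run off the end. A sketch of the safe counterpart, using checked arithmetic so overflow becomes a refusal instead of a short allocation (`alloc_len` is an illustrative name, not a real API):

```rust
// Hypothetical allocation-size helper: overflow yields None instead of
// silently wrapping like `items * size` does in C.
fn alloc_len(items: usize, size: usize) -> Option<usize> {
    items.checked_mul(size)
}

fn main() {
    // Normal case: fits comfortably.
    assert_eq!(alloc_len(100, 8), Some(800));

    // Attacker-controlled case: wraps to a small number in C,
    // refused outright here.
    assert_eq!(alloc_len(usize::MAX / 2, 4), None);

    println!("overflow rejected");
}
```

In idiomatic Rust you rarely even write this by hand, since `Vec::with_capacity` and friends do the checked size computation internally.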


After looking at Stagefright... yes, I've lost faith in the ability to write safe C code.


At that point, I imagine the overhead would be large enough that you may as well just enable bounds checks as the alternative.


The only overhead I see in theory is the additional process's kernel resources, which are negligible. Wherever you'd normally write to memory for some unsafe code to mutate, you instead write to the shared memory. The kernel is doing the same virtual memory protection either way; in this case it just also opens up access to the child process.

Am I missing something else, like shared mem being slower? Maybe the inability to share CPU caches?


The fact that you have a whole other process was my line of thinking. If the scheduling doesn't play nice, then latency will suffer. I don't really know in practice though.


Latency is an issue with file- or network-based IPC, but shared memory is supposed to be the same speed as non-shared memory. Apparently the X Window System relies on it a lot.

Scheduling, I dunno. Would imagine it's not bad as long as you don't spawn a ton of these.



