
> The extra ceremony ("unsafe { foo.get_unchecked(n) }" vs "foo[n]") makes this even simpler to catch in code review

Right, and in said code review, the webp author could have easily said “yup, we want unsafe here because we already checked up front that the buffer won't exceed k elements”. Sure, it’s easier to see that unchecked access is happening, but when the whole point of large sections of the huffman table code in webp is to make this very thing work, it wouldn’t draw any additional scrutiny in a code review. In other words, it would already be super clear to the reviewer that bounds checking is being disabled for performance's sake; seeing the word `unsafe` as ceremony isn’t really adding any information here.
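To make the pattern being discussed concrete, here's a minimal sketch of the "check up front, then skip per-access bounds checks" idiom. The function and constant names (`decode_symbols`, `TABLE_SIZE`) are made up for illustration; this is not the actual webp code.

```rust
// Hypothetical sketch: one up-front length check justifies unchecked
// access in the hot loop. A reviewer sees `unsafe` but, as the comment
// above argues, the surrounding code already makes the intent obvious.
const TABLE_SIZE: usize = 256;

fn decode_symbols(table: &[u8], indices: &[u8]) -> Option<u32> {
    // Up-front validation: after this check, every u8 index (0..=255)
    // is guaranteed in bounds.
    if table.len() < TABLE_SIZE {
        return None;
    }
    let mut sum = 0u32;
    for &i in indices {
        // SAFETY: table.len() >= 256 was checked above, and
        // `i as usize` is at most 255.
        sum += unsafe { *table.get_unchecked(i as usize) } as u32;
    }
    Some(sum)
}
```

The safety of the loop rests entirely on the check at the top staying in sync with the loop body, which is exactly the kind of invariant that can silently rot as the code evolves.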

It’s possible webp could have been implemented in Rust with a naive approach: bounds-check early and rely on the optimizer to elide the checks. But having looked at the code, with all the buffer sizes they’re passing around, it seems unlikely that equivalent Rust code would have auto-optimized. I think it’s quite likely that, in this parallel universe, they would have found the bounds checking to be enough overhead that they would have reached for unchecked access, and it would have passed code review the same way the current code did.
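For reference, here's a sketch of the "rely on the optimizer" approach mentioned above: re-slicing to a known length up front is a common way to let LLVM prove the index is in bounds and drop the per-iteration check, all in safe code. The function name is hypothetical; whether this works in practice depends on how buffer sizes flow through the real code, which is exactly the doubt being raised.

```rust
// Hypothetical sketch: a single up-front slice bounds the loop, so the
// optimizer can usually prove `i < window.len()` and elide the
// per-iteration bounds check while keeping safe `[i]` indexing.
fn sum_first_n(buf: &[u8], n: usize) -> u32 {
    // One check here (panics if n > buf.len())...
    let window = &buf[..n];
    let mut sum = 0u32;
    for i in 0..window.len() {
        // ...so the check inside the loop can typically be optimized away.
        sum += window[i] as u32;
    }
    sum
}
```

When the slice length isn't locally visible like this (e.g. sizes passed around separately from the buffers, as described above), the optimizer often can't elide the checks, and that's when people reach for `get_unchecked`.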



For sure, this absolutely could (and will, at some point) happen. But security isn't only about what can happen, but what will probably happen. Each of these things adds a chance to catch the bug; none of them makes a bug impossible.

> it wouldn’t cause any additional scrutiny in a code review.

I would hope that it would at least cause a "demonstrate that this is actually necessary" review. People do do this! And sometimes, you do have to use unchecked. A famous example of this is literally huffman coding, by Dropbox, back in 2016. They ended up providing both bounds-checked and unchecked options as a flag, because they actually did measure and demonstrate that the checks were causing performance drops. I am curious if they'd still have the same issues today, given that a lot of time has passed, and there are some other factors that would make me wonder, but I haven't followed up on them myself. Regardless, this scenario will end up happening for good reasons at some point; the goal is to get it to happen only when it needs to, and not before.


I would only accept such a PR with profiling data and benchmarks proving the point.


The Dropbox article others and I have linked provided such benchmarks, showing an 11% speedup. Seem plausible?


As with everything in IT, it depends.

That was for Dropbox's specific use case, and 11% seems high in any case.

For example: what improvements has rustc had in the meantime at optimizing away bounds checks? Is the overhead still the same, or is it less? And does it actually matter for the acceptance criteria in the story's definition of done?




