
"When I buy some random unmaintained car from years back, I don't want to have to fight with the transmission fault mode. Just drive dammit."


The better analogy is: "When I pick up a blueprint for a building from the 80s, I don't want to have to fight with the changed sensibilities of the foreman. Just build the thing dammit."


Software doesn't mechanically wear over time, this is an awful analogy.


Yeah, I admit. A better analogy on my part would have been buying a car that's not street legal anymore, or one that no longer works in today's environment: "I don't want to add seat belts, just drive" or "I don't want it to run on unleaded fuel". (Note that I have no idea whether there actually were cars that required leaded fuel.)

Anyway, I think even more likely than running into new compiler warnings/errors is running into external dependency issues, which require you to touch the software anyway and, I'd argue, are often much harder to fix than code that merely triggers warnings.


If you build old software that relied on a specific compiler treating undefined behavior in a certain way, then it could.


yes, people actively rewrite it in rust instead

More seriously, software certainly wears, not relative to itself but relative to the environment it is run in. It wears in the sense that active maintenance is needed to keep old software performing as it should: installing old libraries, running it in emulators, and so on.


You'd be surprised…


What are the odds that a tar-ball suffering bitrot will decompress cleanly?


What I had in mind was UB changing over time more than tarballs getting corrupted (e.g. strict-aliasing-related optimizations).

There's practically no chance for a corrupted gzip-compressed tarball to decompress cleanly (even if by chance it inflates without error, there's a CRC32 of the uncompressed payload at the end of the gzip stream).


> What I had in mind was UB changing over time

I don't want to keep posting this[1] in every C discussion, but when it's relevant, it's relevant. TL;DR: a particular version of a compiler interpreted undefined behavior in such a way as to elide a check that tested for an error case in order to exit early. Because the error condition itself implied undefined behavior, the compiler concluded the check could never be true without undefined behavior and removed it. If that sounds like circular logic, that's because it is.

The error happened to exist in a fairly specific version of Clang shipped by Apple, but not in any of the major release versions:

I was not able to reproduce this bug on other platforms, perhaps because the compiler optimization is not applied by other compilers. On Linux, I tested GCC 4.9, 5.4, and 6.3, as well as Clang 3.6, 3.8, and 4.0. None were affected. Nevertheless, the compiler behavior is technically allowed, thus it should be assumed that it can happen on other platforms as well.

The specific compiler version which was observed to be affected is:

$ clang++ --version
Apple LLVM version 8.1.0 (clang-802.0.41)
Target: x86_64-apple-darwin16.5.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin

(Note: Despite being Clang-based, Apple's compiler version numbers have no apparent relationship to Clang version numbers.)

1: https://news.ycombinator.com/item?id=14163111





