
Upstream updates can add bugs just as easily as bug fixes.


That's an argument for having an effective review and testing process. Change is inevitable, and it's better to be good at handling it routinely than to put it off until an emergency.


This is overhead for every update of every library. In theory it's a great idea, but it's an expensive one, so of course nobody does it.

There are two mindsets in coding: this code needs to work right now, and this code needs to work in 20 years. Linked code is very likely to break in the second time frame. Public APIs are generally unstable, services go away, and people break things. But if all you need is a toy demo, then feel free.


I'm talking about integration tests, not 100% coverage of someone else's code. If you depend on, say, image decoding, you need to be able to update libjpeg and friends ASAP after a security patch – and that only requires a simple integration test covering known input/output for the subset of features you support. Since it's automated, there's very little difference between multiple small releases and infrequent large ones from this perspective.
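
As a rough illustration (not from any particular project), here's the kind of test I mean – a minimal pytest sketch that exercises the JPEG decode path through Pillow, which links libjpeg, against a checked-in fixture. The file names and tolerance are hypothetical placeholders:

    # test_jpeg_decode.py -- run with pytest; fixture paths are hypothetical
    import pathlib

    from PIL import Image, ImageChops, ImageStat

    FIXTURES = pathlib.Path(__file__).parent / "fixtures"

    def test_jpeg_decode_matches_reference():
        # Decode a known JPEG and compare it against a reference decode
        # that was checked in when the fixture was created.
        decoded = Image.open(FIXTURES / "sample.jpg").convert("RGB")
        reference = Image.open(FIXTURES / "sample_expected.png").convert("RGB")

        # The subset of behaviour we actually rely on: dimensions and pixels.
        assert decoded.size == reference.size

        # Allow a small per-channel error: JPEG decoders are permitted to
        # round slightly differently between versions, so exact byte
        # equality would turn every routine libjpeg update into a false alarm.
        diff = ImageChops.difference(decoded, reference)
        mean_error = sum(ImageStat.Stat(diff).mean) / len(diff.getbands())
        assert mean_error < 1.0

Run something like that in CI on every dependency bump and the distinction between many small releases and infrequent large ones mostly disappears.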

As for your second point, I think you're overly focused on the wrong area. Both linked and static code demonstrably have many problems over that time period. If you recompile, you have to maintain an entire toolchain and every dependency over a long period; if you don't, you're almost certainly going to need to deal with changing system APIs, hardware, etc. Linking doesn't do a thing to make a 20-year-old Mac app harder to run. In both cases, emulation starts to look quite appealing – IBM has, what, half a century with that approach? – and once you're doing that, the linker is a minor bit of historical trivia.


Tests just give you a bug report early; they don't fix the bug.



