I'm talking about integration tests, not 100% coverage of someone else's code. If you need, say, image decoding, you need to be able to update libjpeg et al. ASAP after a security patch – and that only requires a simple integration test covering known input/output for the subset of features you support. Since it's automated, there's very little difference between many small releases and infrequent large ones from this perspective.
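For concreteness, that kind of test can be as small as this – a minimal sketch using Python and Pillow as a stand-in JPEG decoder; the fixture paths and golden output file are placeholders for whatever features you actually support:

```
# Golden-file integration test: decode a known input and compare against
# output recorded when the fixture was created. Paths are hypothetical.
from PIL import Image

def test_jpeg_decode_matches_golden_output():
    # Decode a known input with whatever libjpeg the current build pulls in.
    img = Image.open("tests/fixtures/known_input.jpg")
    decoded = img.convert("RGB").tobytes()

    # Raw RGB bytes captured at fixture-creation time. If a dependency
    # update changes decoding behavior for the features you rely on,
    # this fails and blocks the automated release.
    with open("tests/fixtures/known_output.rgb", "rb") as f:
        expected = f.read()

    assert decoded == expected
```

Run it in CI on every dependency bump and the "did the security patch break us?" question answers itself.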
As for your second point, I think you're overly focused on the wrong area. Both dynamically and statically linked code demonstrably have many problems over that time period – if you recompile, you have to maintain an entire toolchain and every dependency for a long period; if you don't, you're almost certainly going to need to deal with changing system APIs, hardware, etc. – the choice of linker isn't what makes a 20-year-old Mac app hard to run. In both cases, emulation starts to look quite appealing – IBM has, what, half a century with that approach? – and once you're doing that the linker is a minor bit of historical trivia.