
And that feature disparity includes security updates. If a library is updated with a security fix, you'll need to update everything that uses that library to get the fix, rather than just the shared dynamic library.


The tools for deploying security fixes should apply to applications just as easily as to libraries. The applications would have to be rebuilt, which should be automated. If a library change causes a build to fail, it is much better to find that out at rebuild time than to have the failure occur when the program dynamically links on end-user machines.
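A minimal sketch of what that automation could look like, assuming a hypothetical layout where every dependent application lives under apps/<name> with its own Makefile (the directory name and build command are illustrative, not from any real distribution):

    #!/usr/bin/env python3
    # Sketch: rebuild every statically linked app after a library update,
    # so build breakage surfaces here rather than at dynamic-link time on
    # end-user machines. The apps/ layout is hypothetical.
    import subprocess
    import sys
    from pathlib import Path

    APPS_DIR = Path("apps")  # hypothetical checkout of all dependent apps

    failures = []
    if APPS_DIR.is_dir():
        for app in sorted(APPS_DIR.iterdir()):
            if not (app / "Makefile").exists():
                continue
            # Clean rebuild so the patched static library is actually picked up.
            result = subprocess.run(["make", "-C", str(app), "clean", "all"])
            if result.returncode != 0:
                failures.append(app.name)

    if failures:
        print("builds failed:", ", ".join(failures), file=sys.stderr)
        sys.exit(1)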

There would be inevitable bandwidth costs in updating like this, but that is the trade-off that is explicitly made by choosing static linking.


> The tools for deploying security fixes should apply to applications just as easily as to libraries...

I don't think anybody would disagree, but you can't dismiss the required effort out of hand. The point is that there are pros and cons.

It's arguable that one really ought to have a build server to mitigate the work. For an OS/distribution, this would be a repository of maintained binaries, so you could run (e.g.) apt-get update and have the affected software fixed; for your "enterprise" or similar software, a comparable in-house mechanism. If everything is static, the act of replacing the binaries on the end machine ought to be relatively simple. The effort for library maintenance moves to maintaining an "out of band" record of which libraries each application uses, so that when a flaw turns up in libxyz, which client-a, client-b, and client-c are using, you _know_ you need to rebuild client-[a-c] one way or another.

It boils down to a question of responsibility: do you build safeguards into the link/run mechanism (dynamic libs) and have it adopt a certain amount of responsibility, or move the cost up front to build/maintenance and manage the responsibility yourself, with some other appropriate tooling?
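As a rough sketch of that "out of band" record, assuming nothing more elaborate than a hand-maintained mapping (every name here, DEPENDENCY_RECORD, libxyz, client-a and friends, is illustrative):

    # Hypothetical out-of-band record: which static libraries each app embeds.
    DEPENDENCY_RECORD = {
        "client-a": {"libxyz", "libssl"},
        "client-b": {"libxyz"},
        "client-c": {"libxyz", "libpng"},
        "client-d": {"libpng"},
    }

    def apps_needing_rebuild(flawed_lib):
        # Every app that statically links the flawed library must be rebuilt.
        return sorted(app for app, libs in DEPENDENCY_RECORD.items()
                      if flawed_lib in libs)

    print(apps_needing_rebuild("libxyz"))  # ['client-a', 'client-b', 'client-c']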


> There would be inevitable bandwidth costs in updating like this

I don't think that's true. You could transfer only binary differences with bsdiff or something, and if there are a lot of binaries sharing the same security update, you could go even further: establish a single patch as a base, and express all the other patches as differences from that base (or use some other appropriate compression scheme). The bandwidth cost should be tiny.
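For example, using Colin Percival's bsdiff/bspatch command-line tools (assumed to be installed; the file names are illustrative), the server publishes a small delta and each client reconstructs the fixed binary locally:

    import subprocess

    def make_patch(old_binary, new_binary, patch_file):
        # Server side: compute a compact delta between the two builds.
        subprocess.run(["bsdiff", old_binary, new_binary, patch_file], check=True)

    def apply_patch(old_binary, new_binary, patch_file):
        # Client side: old binary + delta -> fixed binary; only the
        # (typically small) patch file crosses the network.
        subprocess.run(["bspatch", old_binary, new_binary, patch_file], check=True)

    # make_patch("app-1.0.0", "app-1.0.1", "app.patch")   # on the update server
    # apply_patch("app-1.0.0", "app-1.0.1", "app.patch")  # on the end machine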


That's a problem for me: most of my stuff lives on sub-56k radio or satellite links. Thanks for the explanation!


There is a trade-off between bandwidth and local processing: you can either download every updated dependent binary in full, or fetch just the updated file and relink (or recompile) the affected programs locally.
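A sketch of the relink-locally option, assuming the client already holds each program's object files and downloads only the fixed static archive (names like libxyz.a and the object list are illustrative):

    import subprocess

    def relink(app_name, object_files, updated_lib):
        # Relink against the updated static archive; the full rebuilt
        # binary never has to cross the network.
        subprocess.run(["cc", "-o", app_name, *object_files, updated_lib],
                       check=True)

    # relink("client-a", ["client-a/main.o", "client-a/net.o"], "libxyz.a")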



