
I really like the idea of static linking.

Then I think about how I'd patch the next inevitable openssl bug.

Then I don't like it as much.



It's a complete fallacy that every program that needs crypto has to link to crypto libraries. Look at how Plan 9 does it: everything is statically linked, but other processes do the crypto. Replace only one binary, and the crypto is fixed for all binaries.
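
A rough Linux analogue of that design is to let a separate process do the TLS, with socat standing in for the Plan 9 crypto service (host and ports illustrative):

  # socat is the only binary here that links a crypto library; the app
  # speaks plaintext to it and never links crypto itself
  $ socat TCP-LISTEN:8443,fork,reuseaddr OPENSSL:example.com:443 &
  $ curl http://localhost:8443/   # fixing a TLS bug means replacing socat alone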


Shared object libraries are exactly that: a special case of ELF executable. Special in the sense that users can't run them directly from the command line; other than that, there is no difference, since regular ELF executables can be compiled as re-entrant, position-independent code just like shared object libraries.
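
You can check the equivalence with a toy hello.c (assuming gcc; the exact file(1) wording varies by version):

  $ gcc -fPIC -shared hello.c -o libhello.so   # built as a shared object
  $ gcc -fPIE -pie hello.c -o hello            # built as a position-independent executable
  $ file libhello.so hello                     # many file(1) versions label both "ELF ... shared object"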


Not exactly the same—shared libs don't have process isolation like an external crypto process would.


Yeah, definitely a po-tay-to, po-tah-to distinction.

The real difference between static and dynamic linking is self-contained vs. external dependencies. This dependency can be actual "linking" or it can be inter-process; I don't think that changes the equation.


Interesting fact: while not all shared object files are executable (or rather, do something interesting other than dump core), some most definitely are. Try executing libc someday:

  $ /usr/lib/libc.so.6

See http://stackoverflow.com/questions/1449987/building-a-so-tha... for more information.
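
You can see why this works: glibc deliberately links libc.so with an interpreter request, which ordinary libraries lack (paths vary by distro):

  $ readelf -l /usr/lib/libc.so.6 | grep -A1 INTERP   # present, so the kernel can load it directly
  $ readelf -l /usr/lib/libz.so.1 | grep INTERP       # no output: a typical library lacks it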


Most Linux distributions don't have enough build hardware to rebuild large fractions of their packages in a reasonable time when a vulnerability is found in a popular library. Also, many users will be unhappy to download gigabytes of updates instead of a few megabytes when this happens. (Not every organization runs an internal mirror, and not every user sits on a 1 Gbit pipe.)

This would lead to very slow security updates for the end users.


Chrome makes use of binary patching. Just the deltas are sent. Originally they used bsdiff and ended up implementing their own (superior) version. https://blog.chromium.org/2009/07/smaller-is-faster-and-safe...
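
For reference, the bsdiff tools they started from are a two-command affair (file names hypothetical):

  $ bsdiff app-1.0 app-1.1 app.patch    # on the build server: emit the delta
  $ bspatch app-1.0 app-1.1 app.patch   # on the client: rebuild app-1.1 from app-1.0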


I guess the "suckless" answer would be to not use OpenSSL.


Who knows, we might soon get SuckleSSL. :)

Jokes aside, they have a bizarre philosophy and attitude, especially when you consider that their software is, most of the time, buggy and… well, sucky.


It's hard to take them seriously when they include statements like this on their FAQ: "Of course Ulrich Drepper thinks that dynamic linking is great, but clearly that’s because of his lack of experience and his delusions of grandeur."


Per http://suckless.org/rocks you appear to have guessed correctly.


OK, then same question about whatever SSL library it does use.


You'd run "pkg-manager upgrade", just like you do with dynamic linking.


Yep, but now instead of fetching one updated library, you depend on everybody and their cat rebuilding their binaries and publishing updated versions.


Not really: I depend on my distro to push updated packages that I then install. And I also hope that my distro pushes binary diffs, so updates will be very fast.

The point is: in the context of a Linux distro, it's not true that you need dynamic linking to do security patches effectively. What users do is run the package manager to update the system, and the package manager can provide updates to static binaries as well (and do it efficiently). It's just a matter of tooling; current package managers are designed around dynamic libraries, but they could be adapted.


Is it practical to make diffs of recompiled binaries? Don't you need to compile to position-independent code, or otherwise make sure that most of the code's position doesn't change when a statically linked library changes?


Slightly different comparison, but I remember some Google project doing this for shipping updates a while ago. Must have been for Android, but I can't remember.


Chrome, actually. It's called Courgette [1]. It would be really awesome to apply this to statically linked distro updates.

[1]: https://www.chromium.org/developers/design-documents/softwar...


There is no reason binaries have to be downloaded in full; they can be patched. And we can use Rabin fingerprinting for deduplication, so duplicate blocks aren't sent for every binary. Also, don't forget Chrome's approach of patching the disassembly of a binary. https://www.chromium.org/developers/design-documents/softwar...
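
librsync's rdiff shows the rolling-hash idea in command form (a close relative of Rabin fingerprinting, not the exact scheme; file names hypothetical):

  $ rdiff signature app-1.0 app.sig         # block checksums of what the client already has
  $ rdiff delta app.sig app-1.1 app.delta   # emit only the blocks the client lacks
  $ rdiff patch app-1.0 app.delta app-1.1   # reconstruct the new binary client-side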


You would think a distro like this would be more like Gentoo... you recompile stuff as needed (which for openssl means almost everything).


Gentoo is dynamically linked, so you only recompile if there's an ABI break - a major version - not a patch/minor release. And, you only recompile the stuff that directly links to it.

With static linking, you literally need to recompile everything that uses the library in any form, for every single change. So if there's a security fix in openssl and LibreOffice uses openssl, you need to recompile LibreOffice. If QEMU uses libssh2, which uses openssl, you need to recompile QEMU, even though it doesn't use openssl directly. With Gentoo you just recompile openssl and that's it.

And if there's a fix to glibc, you need to recompile EVERYTHING, because everything would be statically linked to it.
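
Concretely, on Gentoo (soname illustrative):

  # patch release: rebuild only the library itself
  $ emerge --oneshot dev-libs/openssl
  # ABI break: also rebuild whatever linked against the old soname
  $ revdep-rebuild --library libssl.so.1.0.0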


You don't have to recompile everything. If your system keeps a cache of object files, you only have to relink everything, which is quicker.
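
Roughly, with hypothetical file names:

  $ gcc -c app.c -o app.o   # compiled once and cached
  # after the static library is patched, only the link step reruns:
  $ gcc app.o /usr/lib/libssl.a /usr/lib/libcrypto.a -o app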


This is why binary patching exists: http://www.daemonology.net/bsdiff/


You mean "git update":

  * Upgrade/install using git, no package manager needed
How exactly this is going to work, I don't know.


You probably mean `git fetch` or `git pull`. There is no `git update` in the default feature set.
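
If the root tree itself is a git checkout of prebuilt static binaries, upgrading could be as simple as this sketch (pure speculation on my part):

  $ cd /
  $ sudo git pull --ff-only   # fetch and fast-forward the tree of binaries in place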


I like static linking, for some things. Web applications are something I would really like to be just a statically linked binary, simply because it would enable you to chroot the application easily.
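
With no shared libraries to copy in, the jail stays nearly empty (binary name hypothetical):

  $ mkdir -p /srv/jail
  $ cp webapp /srv/jail/
  $ sudo chroot /srv/jail /webapp   # no ld.so, no /lib needed inside the jail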


Indeed, static linking is very handy for deploying web services. Moreover, the trend with Docker and similar technologies is that you deploy a whole new VM image.

That could be considered "static linking", too, because even if it uses shared libraries within the VM image, the image is always replaced as a whole - in those systems you do not replace just a single library within the running image.

If you go even further, you finally reach concepts like MirageOS, where not only the libraries are statically linked into the application, but the whole kernel as well. That way, you have exactly the code you need within your VM, nothing more.
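
The fully static version of that is already common practice: a single binary in an otherwise empty image (sketch, assuming a Go service so cgo can be turned off):

  $ CGO_ENABLED=0 go build -o webapp .   # a fully static binary
  $ printf 'FROM scratch\nCOPY webapp /webapp\nENTRYPOINT ["/webapp"]\n' > Dockerfile
  $ docker build -t webapp .             # the image contains nothing but the binary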


We could use binary patches to reduce download size. SuSE did (does?) it.
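
That's the deltarpm tooling, which openSUSE shipped and Fedora later adopted (package names hypothetical):

  $ makedeltarpm pkg-1.0-1.x86_64.rpm pkg-1.1-1.x86_64.rpm pkg.drpm   # on the mirror
  $ applydeltarpm pkg.drpm pkg-1.1-1.x86_64.rpm                       # client rebuilds the full rpm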


This is a win if and only if there is a fast, competent, reliable security team for the distro.

Suckless doesn't currently have that capability, so... it's not a win, yet.



