It's a complete fallacy that every program that needs crypto has to link against crypto libraries. Look at how Plan 9 does it: everything is statically linked, but it's other processes that do the crypto. Replace that one binary, and the crypto is fixed for all binaries.
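A minimal sketch of the idea, assuming a hypothetical local crypto daemon listening on a Unix socket (the socket path and the wire protocol here are made up for illustration, not Plan 9's actual factotum interface):

    /* Ask a separate crypto process to do the work instead of linking
     * a crypto library. Socket path and request format are hypothetical. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/run/cryptod.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            perror("cryptod");
            return 1;
        }
        /* request a signature; only the daemon holds keys and crypto code */
        const char req[] = "sign sha256 hello\n";
        write(fd, req, sizeof(req) - 1);
        char sig[256];
        ssize_t n = read(fd, sig, sizeof(sig));
        if (n > 0)
            fwrite(sig, 1, (size_t)n, stdout);
        close(fd);
        return 0;
    }

Patch the daemon behind the socket and every statically linked client is fixed at once, without relinking anything.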
Shared object libraries are exactly that: a special case of an ELF executable. Special in the sense that they are not directly executable by users on the command line; other than that, there is no difference (regular ELF executables can be compiled as re-entrant, position-independent code, just like shared object libraries).
The real difference between static and dynamic linking is self-contained vs. external dependencies. This dependency can be actual "linking" or it can be inter-process; I don't think that changes the equation.
Interesting fact: while not all shared object files are executable (or rather: do something interesting other than dump core), some most definitely are. Try executing libc someday:

    $ /usr/lib/libc.so.6
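For the curious, here's roughly how that trick works: a sketch of a shared object that is also directly executable, assuming x86-64 Linux with glibc (the interpreter path varies by platform):

    /* Build: gcc -shared -fPIC -Wl,-e,entry -o libdemo.so demo.c
     * Run:   ./libdemo.so   (or dlopen it like any other library) */
    #include <stdio.h>
    #include <stdlib.h>

    /* embed the program interpreter, as regular executables have */
    const char my_interp[] __attribute__((section(".interp"))) =
        "/lib64/ld-linux-x86-64.so.2";

    void entry(void) {
        printf("hello from a shared object\n");
        exit(0);  /* no C runtime set up a return path, so exit explicitly */
    }

glibc does essentially this, which is why /usr/lib/libc.so.6 prints its version banner when you run it.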
Most Linux distributions are not going to have enough hardware to rebuild large fractions of their packages in a reasonable time when a vulnerability is found in a popular library.
Also, many users will be unhappy to download gigabytes of updates when this happens instead of a few megabytes.
(Not every organization runs an internal mirror, and not every user sits on a 1 Gbit pipe.)
This would lead to very slow security updates for end users.
It's hard to take them seriously when their FAQ includes statements like this: "Of course Ulrich Drepper thinks that dynamic linking is great, but clearly that’s because of his lack of experience and his delusions of grandeur."
Not really; I depend on my distro to push updated packages, which I then install. And I also hope that my distro pushes me binary diffs, so that updating is very fast.
The point is: in the context of a Linux distro, it's not true that you need dynamic linking to be able to do security patches effectively. What users do is to run the package manager to update the system; the package manager can provide updates to static binaries as well (and do it efficiently). It's just a matter of tooling; current package managers are designed around the concept of dynamic libraries, but they could be updated.
Is it practical to make diffs of recompiled binaries? Don't you need to compile to position-independent code, or otherwise make sure that most of the code's position does not change when some statically linked library changes?
Slightly different comparison, but I remember some Google project to do this for shipping updates a while ago. Must have been for Android, but I can't remember.
There is no reason binaries have to be downloaded completely. They can be patched. And we can use Rabin fingerprinting for deduplication, so we don't send duplicate blocks for each binary. Also, don't forget Chrome's approach of patching the disassembly of a binary:
https://www.chromium.org/developers/design-documents/softwar...
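For the deduplication part, here is a simplified sketch of content-defined chunking with a gear-style rolling hash, in the spirit of Rabin fingerprinting (FastCDC-style chunkers use a similar trick; the constants are illustrative, not tuned):

    #include <stdint.h>
    #include <stdio.h>

    #define MASK 0x1FFFu            /* ~8 KiB average chunk size */

    static uint64_t gear[256];      /* per-byte pseudo-random values */

    static void init_gear(void) {
        uint64_t x = 0x9E3779B97F4A7C15ull;
        for (int i = 0; i < 256; i++) {
            x ^= x << 13; x ^= x >> 7; x ^= x << 17;  /* xorshift64 */
            gear[i] = x;
        }
    }

    /* Print chunk boundaries for buf[0..len). Boundaries depend only on
     * local content, so an insertion shifts nearby chunks but leaves the
     * rest identical; a deduplicating updater would hash each chunk and
     * transfer only the chunks the client doesn't already have. */
    static void chunk(const uint8_t *buf, size_t len) {
        uint64_t h = 0;
        size_t start = 0;
        for (size_t i = 0; i < len; i++) {
            h = (h << 1) + gear[buf[i]];   /* old bytes shift out over time */
            if ((h & MASK) == 0 && i > start) {
                printf("chunk: %zu-%zu\n", start, i);
                start = i + 1;
            }
        }
        if (start < len)
            printf("chunk: %zu-%zu\n", start, len - 1);
    }

    int main(void) {
        static uint8_t data[100000];
        for (size_t i = 0; i < sizeof data; i++)
            data[i] = (uint8_t)((i * 2654435761u) >> 16);
        init_gear();
        chunk(data, sizeof data);
        return 0;
    }

Since boundaries are chosen by content rather than by fixed offsets, two statically linked binaries sharing the same library code produce many identical chunks, and those only need to be sent once.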
Gentoo is dynamically linked, so you only recompile if there's an ABI break - a major version bump - not for a patch/minor release. And you only recompile the stuff that links directly against the library.
With static linking, you literally need to recompile everything that uses the library in any form, for every single change. So if there's a security fix in openssl and LibreOffice uses openssl, you need to recompile LibreOffice. If QEMU uses libssh2, which uses openssl, you need to recompile QEMU, even though it doesn't use openssl directly. With Gentoo you just recompile openssl and that's it.
And if there's a fix to glibc, you need to recompile EVERYTHING, because everything would be statically linked against it.
I like static linking for some things. Web applications are something I would really like to be just statically linked binaries, simply because that would let you chroot the application easily.
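A minimal sketch of that, assuming the binary is fully static, the process starts as root, and /var/empty exists (UID/GID 65534 is the conventional nobody; adjust for your system):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* jail the process in an empty directory; a static binary
         * needs no shared libraries mapped inside the jail */
        if (chroot("/var/empty") != 0 || chdir("/") != 0) {
            perror("chroot");
            return 1;
        }
        /* drop root before doing any real work (group first, then user) */
        if (setgid(65534) != 0 || setuid(65534) != 0) {
            perror("drop privileges");
            return 1;
        }
        /* ... serve requests here ... */
        puts("running jailed");
        return 0;
    }

With dynamic linking you'd have to copy the loader and every shared library into the jail; with a static binary the chroot can stay empty.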
Indeed, static linking is very handy for deploying web services. Moreover, the trend with Docker and similar technologies is that you deploy a whole new VM image.
That could be considered "static linking", too, because even if it uses shared libraries within the VM image, the image is always replaced as a whole - in those systems you do not replace just a single library within the running image.
If you go even further, you finally reach concepts like MirageOS, where not only the libraries are statically linked into the application, but the whole kernel as well. That way, you have exactly the code you need within your VM, nothing more.
Then I think about how I'd patch the next inevitable openssl bug.
Then I don't like it as much.