All of these are much slower than native containers. My Docker daemon on Linux doesn't take 15+ seconds to start, and it dynamically uses the host's CPUs and memory.
I don’t know any of the details (apologies!) but some OpenShift devs I’ve talked to and work with at Red Hat do this for their work. It seems some of the details are here [0].
It is a common joke construct in English to use "one is X, the other is Y" to confound the listener's expectation (although X and Y can be reversed to make the joke funnier).
The joke starts with a question of the form "What is the difference between A and B?", where one of the two, A, is to be the butt of the joke. The answer then says "one is X", where X is a list of qualities, probably laudable, apparently describing A. The punchline is "the other is Y", a list of entirely negative qualities, delivered so that you realize the description X, which you thought referred to A, was actually referring to B, and that the negative Y is a particularly mean-spirited description of A. In one common variant, where the negative Y qualities come first, you just say:
'Well, one is Y (all the negative qualities) and the other one is A (just repeating the name).'
The comment was written in such a way that it functioned like one of those jokes. Unfortunately I can't think of a real one right now, but as an example of how the reversed version works, you might have something like this:
What is the difference between X and Ted Bundy?
Well one is an insane Republican murdering abuser of women, and the other is Ted Bundy.
No particular person was thought of as being represented by X when this example joke was formulated.
The grandparent's joke misses the mark, though. It should have been "Singularity is now used for science too!" to make it work. As it stands, he just piled onto the expected list of benefits of Singularity without the necessary subversion of expectation.
Every time I see things about containers, I can't escape the feeling that they're a tool for developer convenience that will result in nightmares for security, maintenance, and life cycle system administration.
When making this statement, it would be helpful to add the "in comparison to" part. People who can't see the benefit of containers may actually live in a world where they have standardised repeatable build environments without containers (it is possible after all).
For me containers are convenient for the task of specifying repeatable build environments. They're also useful for caching those build environments and distributing the cached artefacts in a versioned way.
To be honest, if you are working with only 1 or 2 build environments, I don't actually find it particularly more convenient than setting up something without containers. Often I find it even wasteful, because you usually select fairly large base images (hundreds of megs) even when you only need a few things. You can build a really streamlined image, but it's a fair amount of work.
However, if I have to coordinate many "microservices" that are talking together via TCP/IP and implemented in several different versions of several different technologies, it's a life saver. Normally I try to avoid this circumstance for reasons I wish were obvious to others. But of course we know that it is not obvious to others and so I appreciate being able to use containers ;-)
Perhaps... a sandboxed webmail[0] app capable of sending and receiving encrypted (or cryptographically signed) emails without being vulnerable to client-side attacks stealing your private key(s) or the cleartext?
There may be similar use-cases for tamper-proof (or at least tamper-evident) browser-based games.
[0] I'm specifying a webmail client in the browser rather than a desktop email client.
You mean like DLL hell, and the Linux distro fragmentation mess? History has shown that the trade-offs of dynamic linking are often worse. Case in point: the reason containers are so popular in the first place.
DLL hell is mostly an issue of incompatible CRTs (aka libc in Unix parlance) on Windows. Historically (until circa 2015) Windows programs (particularly those compiled with Visual Studio) often loaded a multiplicity of incompatible CRTs. It was often the case that malloc'd memory from one DLL could not be free'd from another DLL because each linked to a different CRT. Compounding matters, applications would often globally install popular DLLs which, even if compiled from the exact same source code, would break apps if linked against a different CRT version at build time. And that would invariably be the case, as Visual Studio by default links in its own version-specific CRT rather than the system CRT (the Linux equivalent would be GCC shipping and statically linking in its own version of musl libc), making mixing-and-matching of shared libraries compiled in different environments a risky endeavor indeed.
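To make the CRT mismatch concrete, here is a minimal hypothetical sketch (the DLL and function names are made up, not from any real project): a DLL hands out memory allocated by its own CRT's heap, and the host frees it with a different CRT's free(). When the two modules link different CRTs, as described above, this is undefined behavior and typically corrupts a heap or crashes.

    /* ---- parser.dll, built against CRT version A ---- */
    #include <stdlib.h>
    #include <string.h>

    __declspec(dllexport) char *make_greeting(void)
    {
        /* Allocated on the heap owned by THIS module's CRT. */
        char *s = malloc(32);
        if (s) strcpy(s, "hello from the DLL");
        return s;
    }

    /* ---- host.exe, built against CRT version B ---- */
    #include <stdlib.h>   /* for free() */

    __declspec(dllimport) char *make_greeting(void);

    int main(void)
    {
        char *s = make_greeting();
        /* Undefined behavior: this free() targets CRT B's heap, but the
         * block came from CRT A's heap. It only works if both modules
         * happen to share one CRT; the usual fix is to export a matching
         * free_greeting() from the DLL so memory is freed where it was
         * allocated. */
        free(s);
        return 0;
    }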
On Unix there's almost always a single, system-wide CRT. Moreover, commercial environments like Solaris as well as Linux/glibc have maintained strong backwards compatibility (mostly a matter of ABI stability) so that compiled programs continue to work correctly with future libc versions. Believe it or not, third-party dependencies aside, compiling a C program on Red Hat and successfully (and correctly) running it on Debian is the norm, not the exception, assuming you never execute on a system with an older version of glibc than the one present at compilation.[1] (Even though musl libc doesn't do symbol versioning, I assume the story is similar, as it rigorously sticks to exposing only POSIX interfaces and opaque structures as much as it can.)
Therefore, DLL hell is a much less pronounced issue in Linux land. A typical example involves the loading of incompatible versions of third-party libraries. But most widely used third-party libraries like libz or libxml2 have maintained strong backward compatibility. The only culprit I've repeatedly encountered in this regard is OpenSSL, which until OpenSSL 1.1 never committed to a stable ABI, largely because it relied so heavily on exposed, preprocessor generated structures as opposed to idiomatic opaque structure pointers. But IME this was more an issue on macOS as the norm in Linux is to simply use the system OpenSSL version. (Who here bothers to download, build, and install OpenSSL in their containers rather than using the system version of their Debian- or RedHat-derived container base image? It's common enough at large companies but hardly so common that it was a substantial factor in the adoption of containers.)
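To illustrate the exposed-struct vs. opaque-pointer distinction (with a made-up "widget" library, not OpenSSL's actual headers): when a struct's layout lives in the public header, callers bake sizeof() and field offsets into their binaries, so adding a member in the next library release breaks the ABI; when the header exposes only a forward declaration plus accessor functions, the library is free to grow the struct.

    /* Style 1: exposed struct (ABI-fragile, roughly the pre-1.1 OpenSSL approach).
     * Callers compile in sizeof(struct widget) and the offset of each field,
     * so adding a member in version N+1 breaks binaries built against N. */
    struct widget {
        int state;
        int flags;
        /* version N+1 adds: long timestamp;   <- sizes and offsets shift */
    };

    /* Style 2: opaque pointer (ABI-stable).
     * The header only forward-declares the type; its layout stays private
     * to the shared library, which can change it without breaking callers. */
    typedef struct widget_st WIDGET;            /* defined only inside the .so */
    WIDGET *widget_new(void);                   /* allocation happens in the library */
    int     widget_get_flags(const WIDGET *w);
    void    widget_set_flags(WIDGET *w, int flags);
    void    widget_free(WIDGET *w);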
The ELF toolchain on modern Linux systems is actually sufficiently advanced that you could theoretically support loading multiple versions of a library like OpenSSL (as long as you didn't pass objects between them), using mechanisms such as the SONAME linker tag and the far more powerful but little-known ELF symbol versioning. See, e.g., https://www.berrange.com/posts/2011/01/13/versioning-in-the-...
Alas, SONAME is used inconsistently and ELF symbol versioning almost not at all; the only users of symbol versioning of note, AFAIK, are glibc and libvirt. It's at this point that executing in containers becomes the path of least resistance.
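For the curious, here is roughly what ELF symbol versioning looks like in practice. This is a hedged sketch with invented library and symbol names (not taken from glibc or libvirt): two implementations of the same function are exported from one .so under different version nodes, so binaries linked against the old version keep the old behavior while newly linked programs get the default.

    /* libfoo.c -- exports two versions of foo() from a single libfoo.so */

    int foo_old(void) { return 1; }   /* behavior frozen for already-linked binaries */
    int foo_new(void) { return 2; }   /* what newly linked programs resolve to */

    /* Bind each implementation to a version node; "@@" marks the default. */
    __asm__(".symver foo_old, foo@LIBFOO_1.0");
    __asm__(".symver foo_new, foo@@LIBFOO_2.0");

    /* libfoo.map -- the version script handed to the linker:
     *
     *   LIBFOO_1.0 { global: foo; local: *; };
     *   LIBFOO_2.0 { global: foo; } LIBFOO_1.0;
     *
     * Built with the GNU toolchain, something like:
     *   gcc -shared -fPIC -Wl,--version-script=libfoo.map -o libfoo.so.1 libfoo.c
     */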
It's true that installing scripting-language packages or building complex C++ programs which have evolved dependency trees as nightmarish as Node.js/npm's is a huge motivator for containerization, but that's not the kind of thing that is meant by DLL hell. And historically this issue was sufficiently addressed by packaging systems. It's no coincidence Ubuntu is the most popular container base image: the Ubuntu package repository is multiple times the size of Red Hat's, and much more up-to-date. (It's larger even than CentOS' or Fedora's repositories, even when including third-party repositories like EPEL.) In other words, even in the era of containers people still largely rely on the older packaging infrastructure.
[1] Maintaining this "never older than" invariant is an important caveat, but IME I've run into more issues with kernel versions (e.g. unsupported syscalls) than with libc versions. Containers don't solve the kernel version issue.
You say all these words, and yet distributing an application on Windows or MacOS is trivial, but doing the same for Linux is widely regarded as a giant pain in the ass for anyone who isn't just scattering their source into the wind for some maintainer to deal with (often multiple maintainers).
As someone who has principally programmed for Linux and Unix, the reverse is true for me. Building and packaging for Linux and traditional Unix environments feels very straightforward and transparent to me, while building and packaging for Windows and (to a lesser extent) macOS seems like a dark art. And I think that's largely because "properly" built and packaged applications (particularly GUI applications) for those systems should use Visual Studio or Xcode, which require learning and applying proprietary and (to me) restrictive build and link processes. And those processes are geared toward building and packaging dependencies together, whether linking dynamically or statically[1], which in the Unix universe was an anti-pattern to be avoided as much as possible.
More to the point, the phrase DLL Hell was literally coined by and for Windows developers to describe Windows-specific headaches. This much is a fact. Only a minority of people in the Unix universe use the term DLL, certainly in the 1990s when DLL Hell became a meme; the term "shared library" was and remains more common. Using DLL Hell in the general sense of complaining that linking and packaging in a particular environment is too complex and brittle is quite recent and uncommon, though increasingly so.
I did say a lot of words, though :) It was late....
[1] Notably, the Debian/Ubuntu universe rather strictly requires the packaging of statically linkable libraries. With rare exceptions, the foo-dev package should always install a usable libfoo.a and, if supported, foo.a for module-based frameworks that permit static embedding, even if the upstream project only builds a shared library. This makes statically linking trivial without having to bundle your dependencies into your build; it also makes it easier to stick to a policy of using system-installed dependencies, minimizing the risk of transitive dependency conflicts even when statically linking, as the maintainers do the hard work of ensuring version compatibility holistically. Red Hat has a similar policy, but IME it is more poorly executed. On multiple occasions I've found RPM-packaged static libraries (including those built by Red Hat, such as liblua.a) to have been built with the wrong build flags, causing unnecessary namespace and linking headaches, sometimes forcing me to bundle the dependency directly into my build or create a bespoke RPM package. More generally I've found the RPM universe to be much more inconsistent and problematic, and instead of fixing these technical and functional issues Red Hat expends most of its effort attempting to bypass, rather than complement, the RPM ecosystem. By contrast, Debian and Ubuntu package maintainers do a better job of fixing broken upstream builds, which makes life easier for everybody. If I had the choice, I'd stick to supporting only Debian-based operating systems precisely for this reason.
In the windows universe though, that sort of bundling is not really necessary for large parts of what would constitute a GUI application in UNIX land, because the OS provides a guaranteed base set of libraries you can use.
I've been working with Zig a lot lately, and writing a GUI Windows app with no support from any MS tool was a simple matter of some DLL calls. I don't have to recompile for every version of Windows because they might have a different version of libfoo, or it might be under a different name, or any of that garbage.
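In C rather than Zig, the same idea looks roughly like the hypothetical snippet below: resolve a function out of a system DLL at runtime and call it, with nothing vendor-specific beyond windows.h. MessageBoxW and user32.dll are real, stable parts of the Windows API surface; the rest is just illustration (buildable with, e.g., MinGW-w64's gcc).

    /* hello.c -- pop a message box using only system DLLs, no MS tooling. */
    #include <windows.h>

    typedef int (WINAPI *MessageBoxWFn)(HWND, LPCWSTR, LPCWSTR, UINT);

    int main(void)
    {
        /* user32.dll ships with every supported Windows version, so loading
         * it by name works everywhere; no per-distro library hunting. */
        HMODULE user32 = LoadLibraryW(L"user32.dll");
        if (!user32)
            return 1;

        MessageBoxWFn msg_box =
            (MessageBoxWFn)GetProcAddress(user32, "MessageBoxW");
        if (msg_box)
            msg_box(NULL, L"Hello from plain C", L"No SDK project required", MB_OK);

        FreeLibrary(user32);
        return 0;
    }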
Even if I did have a complicated set of library dependencies on top of the OS, I can just zip them up in a folder with my application and call it a day. In UNIX land, I'm expected to jump through a bunch of packaging hoops and make the user jump through hoops to get my package to avoid conflicts that may or may not exist. Even if I want to distribute my application as a straight AppDir, or with something like AppImage, I have to include a hell of a lot to cover things that any given distribution might screw me on, or make the user jump through hoops getting the right dependencies.
>More to the point, the phrase DLL Hell was literally coined by and for Windows developers to describe Windows-specific headaches
Yes, when everyone tried to copy their DLLs into the system folder and share them all UNIX style. There's a reason they don't really do that any more.
The only reason dynamic libraries are worse (besides speed) is the dependency hell.
Put the dynamic libraries INSIDE the container and you can upgrade them when/if you want, while nothing gets broken during a system-wide upgrade where you don't choose to update them.
In an environment like this the only difference between static and dynamically linked libraries is that the dynamically linked ones can be upgraded without needing to recompile. Other benefits, such as not needing to ship as much data, disappear, and the negatives (such as speed) remain.
In this context, then, there doesn't seem to be much of a benefit to dynamic linking. Most people are going to rebuild the container completely rather than enter it to upgrade a library, or if it's a distributed program they're just going to grab the latest version. Being able to upgrade components individually is not a use case that I see being all that important.
Then I guess we have different opinions, because I would only update the parts of the container that need updating, and I wouldn't touch what isn't broken, just like how I run apt-get upgrade more often than apt-get dist-upgrade: because even if it should all work fine in theory, in practice it doesn't.
I am mostly interested by gains in security, gains in maintainability, and gains in simplicity. Dynamic libraries reduce the number of parts that need to be managed, reducing overall complexity.
Also, if I remember correctly, UWP apps on Windows are managed a bit like that, with the equivalent of an xdelta (binary diff).
Being able to update just one or a few libraries would be handy in other cases, not just critical exploits.
Off the top of my head, the first use case I see is old apps, deployed say years ago, where the compilation scripts/recipes/whatever you want to call them have been lost and the employees have left, and you need to change one piece of the whole for whatever reason.
There's totally a place for static binaries, but it is a very specific one, where you have special constraints (ex: when I update a remote server, I want to have a static busybox)
> I am mostly interested by gains in security, gains in maintainability, and gains in simplicity. Dynamic libraries reduce the number of parts that need to be managed, reducing overall complexity.
I'd argue they do no such thing; the complexity is still there, only now your users can upgrade libraries and potentially break your app in ways you never expected.
Let's take OpenSSL as an example; I've seen upgrades of it break applications easily.
Your app should treat any dependency change as a new version or patch upgrade. And if you're releasing a new version of the application anyway, what gain did the dynamic library provide? About none.
I've seen it from both the static and dynamic sides, and I vastly prefer the cons of static binaries to the cons I've seen from dynamic libraries.
How would this impact the sort of incremental app updates that Chrome (for example) does using Omaha? E.g. would dynamic linking result in smaller or simpler deltas compared to static linking?
How would a static libssl affect, say, a binary diff versus a dynamic one?
It should be in the noise floor for decent binary diff algorithms. I seem to recall Chrome uses that to essentially patch things in place. I'd have to look, but for, say, ELF binaries I highly doubt a binary diff patch would be larger for static than for dynamic. Ignoring updates to the static library portion itself, that is.
Well, it's no different from the amount of difference you'd have for libssl anyway, obviously. With a binary diff algorithm, if libssl.a doesn't change then it's just your changes. If you update just libssl, then it's the diff of those bits. If both, it's both; I can't speak to how much that is, as I've never done it. Test it and look at the sizes of the diffs. You'd be updating ssl somewhere, be it via the package manager or not.
A lot of people who use containers are also fans of immutable infrastructure, which may be where some of the disconnect is here. Even before containers got popular there were a lot of shops that had disabled SSH into their machines to discourage the "artisan" server mentality.
I also don't know anyone who rolls containers manually by hand; generally speaking they're automated (using things like Dockerfiles and provisioning scripts, which are often right in the repository storing the code), making rolling a new container easier than trying to upgrade a single library.
Generally speaking, I am not a big fan of restricting options. Having SSH is very helpful in case of a catastrophe requiring immediate attention.
Let's leave these emergencies aside. I prepare binaries, and deploy them to servers. The deployment contains several "moving parts", so there are many tests to validate deployment and make sure that all parts work well together.
There is also a rolling deployment, to compare the performance of the current version N to N-1 and N-2 on multiple axes, and catch weird regressions. When something goes wrong that could not be caught in tests (recently, a cascading issue caused by an increase in latency due to a change in the routing), I have to see what's happening on one of the live instances. I have to tweak things there.
Doing the equivalent of git bisect with binaries running on many servers is not fun.
If a feature or a fix only impacts one shared library, I would LOVE to be able to roll only that out to part of the deployment fleet and see if it fixes the issue, and roll a different mix of libraries to another small part of the deployment, etc.
Consider that I am testing x different versions of library X and y different versions of library Y. I could do that just as well with xy = N different static binaries, but if I can do it with one master binary and x+y dynamic libraries, I believe it makes my life easier.
Live A/B testing is not really possible if the difference is a few percent in efficiency: you have to wait to have enough samples and do statistical tests to see whether 1900 samples per time unit on average on servers A, B and C is a regression compared to 2000 samples on average on servers D, E and F. If I could test combinations of versions, I could more easily compare server A with library X version x to server B with library X version x-1 to server C with library X version x and library Y version y-1, etc.
xy vs x+y seems small, but increase x and y, add more dimensions, and you really start to want more "breathing room". For example, 4 versions of X and 5 versions of Y is already 20 static builds but only 9 libraries; add versions of a third library and the product keeps multiplying while the sum barely grows.
An application consists of its code and dependencies. If you change a dynamic dependency, then you are changing the application, and this merits a new release of the application and a version bump.
Many shops arrived at this pattern (with bash scripts or config management) after trying a simple `apt-get upgrade` for some security patches, only to realize the application behavior changed or broke entirely.
Static linking simply codifies this practice in a tighter construct, as do containers in general.