Immutability is an important step in the right direction.
What I'd like to see come up is a distribution that in addition to immutability provides the following:
1. a separate install prefix directory for each currently installed package version (except for a minimal immutable core), thus allowing the coexistence of different versions … and making access control policies practicable, without the mess they become on a filesystem where there is a wild, hard-to-separate mix of files from lots of packages and other origins.
2. a modern and secure binding/mapping mechanism between the package install prefixes and what each of those and the user are supposed to see (depending on profiles that can vary not only between packages but also e.g. between different users, different use cases, etc.). The "modern and secure" part meaning: not an old symlink-based thing, but something that uses the modern isolation techniques, cgroups/namespaces, bind mounts, and the other mechanisms the kernel provides today.
3. a nice frontend that makes it easy to administer those package versions/mappings and to set access control policies based on them, with a sensible set of distribution-curated profiles for the mappings, isolation settings, and access control policies.
Parts of that already exist, e.g.:
1. (the per-package version install prefix) has been spearheaded by Nix/NixOS … (but it's missing 2. and 3.)
2. the secure isolation (cgroups, namespaces, bind mounts, etc.) part of 2. has been spearheaded by Containers… but their simple layering structure doesn't allow mapping the more complex dependency graph of a package management system.
nixos missing 2/3 is what's behind my repeated insistence that nix is in sore need of a plan 9-esque host system. perhaps with fuchsia there's hope yet.
cgroups and linux namespaces are great and all but pervasive namespace abstraction really really needs a fluid, low-friction interface for constructing those namespaces and subsequently examining precisely how they're put together. this is why typing ns in plan 9 is a huge eye-opener for a lot of newcomers. suid and friends ensure a perpetual lower bound on red tape and drudgery where traditional unix-likes implement namespaces.
in the same way that flakes address build-time (and hence, nix project) boundaries, i want to easily control runtime boundaries
does a service need to see all of the store? no; just its closure. all of /proc? probably not. i would love to stop caring about secrets in the store
something as heavyweight as a container just to achieve those restrictions almost feels insulting, and something like pledge(2) approaches the problem from the wrong end.
then you take those well-defined boundaries, and you know exactly where you can deploy that service. take advantage of that well-known nix reproducibility, stub out the endpoints, plop it down on your local system, debug it at zero latency, revise it, deploy the fixed version.
Just about every restriction using cgroups/namespaces can be applied in systemd units. Many of the services in NixOS have been hardened through systemd, and there's an ongoing effort to harden all of them.
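For example, a minimal sketch (the service name is hypothetical; every option below is a stock systemd sandboxing directive, set through the usual NixOS module system):

    # Hypothetical service; these are standard systemd hardening options.
    systemd.services.myservice.serviceConfig = {
      DynamicUser = true;          # run as an ephemeral, unprivileged user
      ProtectSystem = "strict";    # /usr, /boot, /etc are read-only to the unit
      ProtectHome = true;          # hide /home, /root and /run/user
      PrivateTmp = true;           # private /tmp and /var/tmp
      PrivateDevices = true;       # no access to physical devices
      NoNewPrivileges = true;      # block setuid/setgid privilege escalation
      RestrictAddressFamilies = [ "AF_UNIX" "AF_INET" "AF_INET6" ];
    };

There's also the NixOS confinement module (systemd.services.<name>.confinement.enable), which chroots a unit so that it sees roughly just its own Nix closure, close to the "just its closure" idea upthread.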
OS-level "closures" are something I think about, and then the idea of composing said container/env closures into larger ones. I don't know if it's stupid or useful.
I really like these ideas. The OS, the interface that I would like to work with, that I have worked with my entire career, the interface that I'm an expert in, begins to recover some territory from the abomination represented by Kubernetes and Docker.
Here's a snippet of my NixOS config that enables immutability. It mounts root on a tmpfs, which is erased on reboot. tmpfs is kind of a ramfs, but will page to swap in most cases. Add additional bind mounts as necessary.
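The general shape looks something like this (a minimal sketch of the pattern, not the exact snippet; the disk label and the /nix/persist path are illustrative):

    # Root on tmpfs: wiped on every reboot. /nix persists on disk.
    fileSystems."/" = {
      device = "none";
      fsType = "tmpfs";
      options = [ "defaults" "size=2G" "mode=755" ];
    };

    fileSystems."/nix" = {
      device = "/dev/disk/by-label/nix";  # illustrative label
      fsType = "ext4";
      neededForBoot = true;
    };

    # Bind-mount the state you actually want to keep, e.g. SSH host keys:
    fileSystems."/etc/ssh" = {
      device = "/nix/persist/etc/ssh";    # illustrative persistence path
      options = [ "bind" ];
    };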
It’s much less plug-and-play than the options in the article… but I’ve had fun with a related idea, which is running NixOS on storage where root is erased on every boot.
My home server is set up this way. At the time, I more or less followed this guide, which explains things nicely:
One of the hurdles with this approach is that _everything_ has to live in configuration.nix. How are you handling secrets like your host keys? I recently got sops-nix working as a prototype, but curious if there are better ways.
Side note: everything living in configuration.nix makes flake-ifying your system a lot easier, which is a win in my book.
There's a nix way to read a secret from a file. I'm not sure if it still ends up in the nix store that way but at least it's not in the config file so your VCS is clean.
Typically with NixOS, you'll have systemd or a wrapper script read the secret from a file into an environment variable for the service that needs the secret. This secret file would be stored in `/var/lib` or similar, outside the Nix store. There isn't really a "Nix" way, just a pattern used in various places in NixOS.
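A minimal sketch of that pattern (the service name and file paths are hypothetical):

    # The secret file lives outside the Nix store, e.g. under /var/lib,
    # provisioned by hand or by a tool like sops-nix.
    systemd.services.myservice.serviceConfig = {
      # KEY=value lines in this file become environment variables:
      EnvironmentFile = "/var/lib/myservice/secrets.env";
      # Or, on newer systemd, hand the service a credential file it can
      # read from $CREDENTIALS_DIRECTORY:
      LoadCredential = "api-token:/var/lib/myservice/api-token";
    };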
I've cloned this setup for BTRFS on Guix, which took some questionable hacks because they don't have all the hooks I needed throughout the boot process... still meaning to migrate it to ZFS on Root, which will entail yet more hacks. Going for it though c:
Generally, the NixOS feature discovery/troubleshooting/documentation workflow should not involve Google.
It should start with a NixOS Options Search on search.nixos.org, where for most software projects the search will also end when you turn up a one-liner (or close to it) that enables your desired service or sets your desired configuration.
Then if you need to know more, check the NixOS manual and maybe the Nixpkgs manual.
If those things don't (quickly!) answer your question, then it's time to start Googling, checking the wiki, grepping through the Nixpkgs codebase, asking on Discourse or Matrix or IRC, etc.
Just wanted to note that here, even though you were joking, because that search order is a super simple way to make using NixOS a lot more enjoyable and less painful. Most NixOS users settle on a flow similar to that over time, but if you use that search order from the start, it'll help keep things fun. :)
I'd put it more like: NixOS is 95% wonderful, 5% very painful.
Most of the things you'll want to do are simple to achieve.
But, yeah, if you hit something & you're not sure what to do: there's less support for NixOS than other Linuxes; and the NixOS documentation is quite fragmented. Community resources like the wiki or blog posts may even be outdated.
There is no other distro (maybe besides Arch and its wiki) that shows you, say, all the 293 (channel 22.11, as of 2023-03-31) places where you can mess with DNS in the entire distro.
Well thank the benevolent lord that NixOS is the only OS you have to google stuff for. Good heavens, I could only imagine the horror of googling for a bug in Debian. A relief such a thing is nary even conceived of.
Cf. the Nix survey results from last year, which indicate that many of the respondents use NixOS as their daily Linux distribution (e.g. the numbers of responses for just "uses GNOME" and "uses Plasma" are each about half the number of responses for "uses NixOS daily").
https://discourse.nixos.org/t/2022-nix-survey-results/18983
Many of those who do use NixOS as a desktop Linux are enthusiastically happy about it.
Indeed, this user described the combination of nice features & rough edges as "cursed". (It's so good you won't want to use anything else, but it's too rough to recommend generally).
https://blog.wesleyac.com/posts/the-curse-of-nixos
I only recently discovered NixOS, and the first thing I did was install it on my workstation. I would say NixOS has a fantastic foundation/philosophy, with a lot of paper cuts, and relatively poor documentation. That being said, I don't see myself using anything else going forward, and hope to help address some of the pain points.
This comment threw me off hard because of how loaded it is.
1. How does that address the fact that NixOS is immutable?
2. If discussing server-side applications was the author's intention, why didn't they make that the explicit focus of the article? As it stands, that's not the stated topic; it's just one point the article includes about why immutable OSes are cool. That's a noticeably different reality.
3. Even if we look past the above, NixOS can be used for server-side deployment, and in many cases is preferable to, say, Ansible deployments, so that STILL doesn't explain why it was left off!
There are definitely people who only use it on servers. Using NixOS on servers and Nix + macOS for the desktop/laptop is a somewhat common combo, too.
But NixOS is actually a very popular choice for desktop OS within the Nix community! That doesn't mean it's right for everyone, but it means that quite a lot of people who find Nix advantageous in other contexts also find it enjoyable or even indispensable for desktop usage.
When I still had an HTPC/living room gaming PC, it ran NixOS. It ran Kodi, a bunch of emulators with EmulationStation as the frontend, Steam, and a Plasma desktop (KWin's built-in fullscreen zoom + a good Steam Controller configuration is actually a great interface for full-fledged web browsing on a TV!) and served me very well for years.
It's the source of one of my favorite NixOS stories: one night my aunt was over to watch a movie with me, and I was running a NixOS system upgrade in the background while the movie played. It was a stormy night, and we had a brownout that forced the computer to reboot. Thanks to NixOS' atomic update procedure, the computer just booted right back up in a few seconds, and we put the movie back on! No need to troubleshoot or repair in order to get a working system. (I did later have to nuke some Nix cache which was corrupted by the process, but that affected only some of Nix's own operation and none of the application software or OS components that Nix managed on the system. Super cool!)
NixOS is a general-purpose distribution. In desktop use, it provides better management for the ad-hoc software dump that any personal machine inevitably becomes.
EndlessOS (started in 2017) is a distro based on Debian that uses OSTree for an immutable root partition. It’s targeted at non-technical users and is particularly suitable for not-always-online environments.
Linux distros designed to run "Live" (formerly off a CD, but nowadays a USB drive) are approximately immutable. A couple of good examples are Tails OS and the venerable TinyCore Linux distribution.
I also agree with many other commenters about inclusion of NixOS and GuixSD.
Can someone explain 'immutability' (in this context) more clearly?
The article doesn't really explain anything. The filesystem is read-only (from the perspective of the user) in all Linux distributions. How is this different?
I don't know about ChromeOS, but I would definitely consider that Android is not a Linux distribution.
To me, a Linux distribution is an OS where, given that I am very comfortable with another Linux distribution, I will quickly be able to find my way around.
Does Android have services? As root, can I start/stop them? Can I install libraries somewhere and use them somehow? How do I package an app (other than by running "build" in Android-Studio and letting the magic happen)? Could I install my own window manager?
=> I don't know any of that, so it's not a Linux distro.
As a comparison, SteamOS is based on Arch Linux and has a particular setup (with a read-only "core" partition I guess?), but I am pretty sure that if you give me a shell, I will feel like I am in a Linux distro.
Yes, you use `am startservice <service>` and `am stopservice`.
>How do I package an app (other than by running "build" in Android-Studio and letting the magic happen)
If you mean to manually do what the Android Gradle plugin does: you can manually call javac to compile your Activity, call d8 to turn the classes into a dex, call aapt2 to compile your resources, zip all of it up, run zipalign on that zip, and call apksigner to sign the APK.
>Could I install my own window manager?
Android's window manager service is built into system_server. You can't really install your own. Someone could edit WindowManagerService.java, share it with you, and you can rebuild system_server to include that.
Interesting, thanks for the answers! Still, to me, this is as close to a Linux distribution as macOS.
But yeah I'd like to learn more about Android. Unfortunately I haven't found documentation that got me started. Maybe it is there, but when searching about Android, I tend to get the Android SDK, not Android-as-an-OS stuff.
If you have any resources to share then, it would be nice :-).
There's no denying the userland's different; there's just this notion where one's called GNU/Linux and the other ART/Linux (Android RunTime). But the thing they share in common is the Linux kernel, hence the name.
Sure. Yeah I guess it's not very important. But since we're here :-)...
My point was really that if you say "I work on a Linux distribution", I think about something like Debian/Fedora/Ubuntu/... I could be pedantic and say "oh, is that a GNU/Linux distribution?", to which you could answer "nope, it's Alpine". And then we could debate exactly what is required to call it "GNU/Linux" and not "something-else/Linux".
If now you showed me your computer running Android, I would honestly be very surprised. Why didn't you say "Android" instead of "a Linux distribution"?
If you tell me "I run Linux", I won't say "oh, do you not run a userland then?" either. Linux is a kernel, but it is also the name commonly given to a group of OSes that are very similar (and that are commonly referred to as "Linux distributions"). Android is not part of that group.
It's not a Linux because consumer devices don't run kernels from kernel.org.
At best it is "based on Linux", or maybe "Linux-compatible".
This is an extremely important distinction, because proprietary kernels and drivers are why you can't install a different Linux distribution on your Samsung or Xiaomi phone.
Yes, but the point was about a "Linux distribution". Linux is a kernel, and a "Linux distribution" is generally understood as Linux + some fairly specific userland (generally GNU).
Of course it is a gradient, but Android goes pretty far away from the common understanding of "Linux distribution".
I don't care about "is Android really a Linux?" either way, but the distinction the parent is making is that Android extensively modifies the kernel (no idea to what extent android modifies the kernel, but this seems to be the parent's central claim) whereas you're arguing that all distros have their own userland (Linux is a complete OS in the sense of "OS == kernel", but presumably you mean "OS == kernel + userland").
Most of these focus on running containers. What I would like is a distribution that focuses on the reliability of upgrades, both configuration and data. To give a concrete example, I have a Debian server running Apache (with auto-certificate renewal), PostgreSQL, and Postfix. When it's time to upgrade to the next release, it's basically the "it should work, but who really knows, depends on your specific setup you may have to tweak a few things" kind of guarantee. What I want is a rigorous guarantee like we have for relational database schema/data migration.
This _is_ Silverblue and other OSTree-based distros.
This article is focused on containers because when the root OS is immutable, mutable developer workflows etc. necessarily happen in containers (distrobox, toolbox, whatever). OSTree is/was commonly described as "git for filesystems". Every update is an entirely new system image which applies a 3-way diff. It's possible to state in a single command "show me the drift in my config files/data versus the ref I'm running" (e.g. `ostree admin config-diff`) and/or "show me the diff between my running system and an update I may apply" (e.g. `rpm-ostree db diff`).
I think Silverblue's approach is a good start, but for desktop usage I'd like to see userland "environment" things like DEs get their own layer with a similar treatment as the root OS, so you could e.g. roll back GNOME or KDE independent of the rest of the system if you so desire. This would also make it very difficult to inadvertently put the system in an unusable state through things like dependency conflicts — if your DE fails to start it can simply fall back and start the last known good version.
The lines for what counts as "environment" are fuzzy which might pose a challenge, but the fix for that could be as simple as offering sane defaults and letting the user decide what is/isn't included.
> I'd like to see userland "environment" things like DEs get their own layer with a similar treatment as the root OS, so you could e.g. roll back GNOME or KDE independent of the rest of the system if you so desire. This would also make it very difficult to inadvertently put the system in an unusable state through things like dependency conflicts — if your DE fails to start it can simply fall back and start the last known good version.
Yes! I've been thinking about the same thing as an accessibility feature for disabled users, as well. I recently learned that I'm going blind, and so I've been thinking about how this kind of mechanism could be used to ensure that a system always has, e.g., a working screen reader and fullscreen magnification software. I'd like to add something like this to NixOS, which is my favorite distro and daily driver.
>It's possible to state in a single command "show me the drift in my config files/data versus the ref I'm running" and/or "show me the diff between my running system and an update I may apply".
I can see how this can be usable for configuration. But I am having a hard time imagining how this would look for something like PostgreSQL's data files.
/var is not immutable, and /home is relocated to /var/home on Fedora Silverblue for this reason (as far as I recall; it's been a while since I've checked up on Silverblue).
I have switched to Fedora Silverblue because I wanted my system to be as stable as possible between updates. But I think it's true that one has to delve into container technologies to use the OS fully, and that is a bit of an overhead.
These types of distros are currently suitable for users who are either very advanced OR non-technical (whose needs are completely filled by an app store).
It's what you get with openSUSE MicroOS: you upgrade from snapshot to snapshot, released regularly. I think Silverblue works the same way, but openSUSE is easier.
I also use Tumbleweed on personal servers, never had issues.
What do you expect to happen when an upgrade to a daemon involves a policy choice?
For example, let's say that you have Postfix using a smarthost as a relay, and the new version of Postfix requires relays to have a shared secret for authentication with each of their trusted clients. That isn't something that can be handled by an automated config translator.
This sort of thing happens a lot. The best that can be done is putting the changes into a document for you to read, understand, and then make decisions.
> For example, let's say that you have Postfix using a smarthost as a relay, and the new version of Postfix requires relays to have a shared secret for authentication with each of their trusted clients. That isn't something that can be handled by an automated config translator.
This would be roughly equivalent to adding a new column or a table in a relational database with some non-NULL/empty data in it, right? Somehow we deal with this in that case (or at least we are forced to deal with it, or the transaction gets aborted), rather than what we have currently, which is: "oops, new Postfix has been installed, you didn't read the documentation carefully, and none of your servers can send mail anymore".
> This sort of thing happens a lot. The best that can be done is putting the changes into a document for you to read, understand, and then make decisions.
A distro that has a simulated-upgrade command, which generates a report on all these incompatibilities, would be one way.
As an example, Apache 2.2 to 2.4 changed configuration syntax such that as far as I know there is no configuration converter that will work for all configurations. So users of all distributions had a hard time.
I've been running Flatcar Linux as my home server OS since it was CoreOS (which was later acquired by Red Hat and rebased onto OSTree; Flatcar kept the A/B partition setup, so it was nice to be able to upgrade to it directly).
You set a maintenance window for the reboots and then basically you don't need to touch it. I keep all my docker-compose files in GitHub and just have them all set to launch on boot. linuxserver.io has well-maintained images for just about everything you'd want to run at home.
Both Fedora CoreOS (ostree based) and openSUSE's MicroOS are the same way, they give you a kernel, systemd, and a container runtime and then the workload is decoupled into containers, it's a nice setup.
I've been looking for something very similar to this for a while. Do you have a writeup for your setup? Also, how does Flatcar's immutability work? TFA mentioned that MicroOS is based on btrfs snapshots, but it didn't give any indication about how Flatcar worked (nor did the Flatcar GitHub readme). Do you know of any comparisons between Flatcar, CoreOS, and MicroOS? Also, any idea which of these work with Raspberry Pis?
Flatcar uses 2 partitions, A and B: you boot into one, and updates are applied to the one you're not booted into; when you reboot, it boots into the updated one. It's like Android: https://source.android.com/docs/core/ota/ab
I maintain an awesome-list of immutable resources with a collection of talks and presentations from the people making the stuff. They do a better job explaining it than I can in an hn comment: https://github.com/castrojo/awesome-immutable
However the list is currently focused on desktop stuff since this is a fairly common pattern in cloud already, I should probably write it up.
Semi-related, a few of us have started a community around composable OCI fedora images, and one of our images is intended to be used as a home server built on CoreOS with ZFS, cockpit, and all the goodies you'd need. It's still fresh and we're looking for help if anyone's interested: https://github.com/ublue-os/ucore (Disclaimer: I helped start this project)
Whenever I was looking at using CoreOS, I was somewhat disheartened that automatic reboots weren't built in: https://github.com/coreos/rpm-ostree/issues/2831. Has this changed? I know zincati has maintenance window support, which would also be nice to have.
The problem is that Docker in the MicroOS repos is out of date; yes, it's based on Tumbleweed, but it's still not comparable to the official Docker repo.
Also, it's hard to find cloud providers offering MicroOS.
The best way seems to just treat an Ubuntu/Debian system as an immutable one, using unattended-upgrades for everything
I've been doing this since ~2008 or so using Debian Linux. My custom setup is also capable of booting other computers on the network over PXE. It uses AUFS and tmpfs - yes, it needs upgrading to a more modern union filesystem. The neat thing about it is that you can turn off the power to the computer and any changes you make are completely gone (depending on the user account). Thus whatever I do on my computer leaves absolutely no trace whatsoever. I use it to write my temporary personal diaries/notebooks, and I can write whatever I want, and once the power's cut, it's gone, because it was all in RAM only.
A very good adaptation for living in an authoritarian society (a "police state") with draconian laws, where they are actively trying to clamp down on dissent, including suppression of certain political views. Down with Big Brother and the Thought Police!!!
I see this being very useful for containerized use cases. Usually, you have a Dockerfile that installs and sets up everything once at container creation time. After that, you don't want anything in the OS to change. Great for security.
I agree, but at least in many cases, containers aren't the immutability mechanism. E.g., MicroOS seems to rely on btrfs snapshots: all changes are dropped on reboot and the system starts from a fresh factory state. Your application could be containerized, or it could be a package pulled from the distro repository and managed by systemd.
Qubes OS should have been mentioned. It's not a Linux distro and it's not immutable, but it allows you to create disposable virtual machines from customized templates. For me, it is much easier to manage.
Well, there are distros in the list that are optimized for containers.
If you use Linux distros in Qubes OS, then I would say that it counts as a Linux distro (by default you get Fedora).
And by default you run in VMs, where some parts are persistent (e.g. your home) and some are not. Just like running a container with some persistent storage.
It depends on your use-case whether you like it or not, but I would definitely put Qubes OS in the list.
Qubes OS is a hassle to set up and it's too heavy in my experience. Also I couldn't suspend to RAM or hibernate because my hardware wasn't fully supported.
That's the problem. I don't want to buy new hardware just for running an OS or having the ability to suspend/hibernate, especially so if it's more expensive than what I have.
Well sure, but that's not entirely a fair point. Of course that's a limitation, but that does not mean that it is a bad OS.
If you have a PC right now and want macOS, then you will have to buy new hardware (e.g. a MacBook) to run it. You wouldn't say that this is a problem with macOS, though, would you?
I don't really understand what immutable means here. If you install Debian and then never touch anything other than your /home folder (therefore never touching the bootloader, any configuration files in /etc, not running apt for any changes/upgrades, leaving the kernel alone, etc.), isn't that "immutable"?
Is it not possible to have a client / desktop OS where each app runs in its own container by default with its own writeable filesystem... I would have thought that's beneficial from a security perspective as well as making things easier to separate...
> Is it not possible to have a client / desktop OS where each app runs in its own container by default with its own writeable filesystem...
We have several "container as app" solutions around:
* Microsoft Universal Windows Platform;
* Canonical snaps;
* Flatpak;
* containertoolbx/Distrobox (this one is very DIY and what I use);
It's just that (as of now) most apps rely on (and are allowed to be deployed in "stores" with) very leaky container isolation (like full filesystem access), so they might as well not be deployed inside containers in the first place.
I would say "premature obsolescence" (at the risk of sounding pedantic).
To me, "planned obsolescence" says "we engineered the product such that it does not last". "Premature obsolescence" says "our product is not good enough to last longer".
The tendency is to build cheap crap, so of course it doesn't last. Doesn't mean that the engineers spent resources making sure it would not last.
Something like SELinux would let you make any distro just as immutable as any of those distros in the list, to a finer level of granularity, and without having to use a specialized distro.
People that want an entirely containerized distro probably don't want to maintain a bunch of SELinux rules or deal with the inevitable breakages that occur due to old/poorly designed software that is needed for one reason or another. It's entirely opposite of the goal of making everything a drop-in no-maintenance package.
Sure, and on any distro you can "sudo mount -o remount,ro /" and also have a useless distro, just like using selinux to make the whole thing immutable.
The difference is you won't have to spend 2 years learning a custom policy language.
> also have a useless distro, just like using selinux to make the whole thing immutable.
Absolute nonsense.
> The difference is you won't have to spend 2 years learning a custom policy language.
If you had spent maybe 2 months learning this tech that's been around for almost 20 years, you wouldn't have to use specialty distros to accomplish what your distro can already do natively.
I've never been a fan of willful ignorance and fearmongering, and I see a lot of that when it comes to SELinux.
If your goal is a usable immutable distro, selinux is the wrong tool. It just doesn't operate at the layer of creating a linux distro, or declaring what containers are running, or declaring a list of flatpak apps, or whatever.
My point, by comparing selinux to "mount" and calling them equally useless, was not that they are not useful tools in their niches.
My point was that they are useless if your goal is to build a usable immutable distro.
SELinux is a hammer that cannot be used to turn a default debian installation into a usable immutable linux distro. The initial claim of "using SELinux rather than a dedicated distro can work" was nonsense, so of course the thread of replies off this is nonsense. GIGO as they say.
It could certainly be a lot nicer and more straightforward (and there are competing solutions that are), but people vastly exaggerate the difficulty of learning it, and dismiss the advantages that come from having it properly configured.
How? How do you get immutability from SELinux policies (specifically, without crafting detailed policies for everything running in the entire userland, at which point you're just begging for a distro, right)?
Honestly, if you really want something immutable and distro-agnostic, you probably want something like btrfs snapshots. However, converting the root filesystem to btrfs is probably a pain, and there will be some initial configuration to prepare (cloud-init, Ignition, etc.) before you take your initial snapshot. You'll also probably want some "boot from snapshot" functionality, at which point it would be pretty nice to have all of this packaged up in a distro so each user doesn't have to figure it out every time.
SELinux isn't about immutability, it's about program confinement. Running each daemon in its own fs/user/net namespace comes a lot closer to mimicking the value of SELinux than making the OS immutable.
I like immutability, but I don't find it super useful in an OS. For file updates, sure, I like the idea of system-level snapshots. But generally speaking it's the entire system state that needs immutability, not just one application.
In terms of server workloads, I run immutable containers, so doesn't matter what the base OS is. But I want the OS to be something very popular and stable with paid support.