Config can also be made reproducible with VM images, containers, home-built packages, configuration-management tools (Ansible, Puppet, …), or a mix of some or all of them.
Personally, I think the chase for silver bullets does more harm than good. I'd rather work with multiple configuration-management tools that each do one thing well than deal with new contenders that want to do it all and spend little effort on interoperating with needs outside their use case.
As a single example: Guix does not support setting capabilities on binaries in the store; if you want to set CAP_NET_ADMIN on ping, or you want some service to run with CAP_NET_BIND_SERVICE, you're stuck. There is no way to make it happen inside Guix, so you're left with very ugly manual hacks (mount --bind /gnu/store /mnt ; setcap...; umount /mnt). Similarly, neither Nix nor Guix can be used as a deployment tool only, since they do not preserve post-deployment configuration changes (to the point that GuixSD even deletes user accounts if they're not in the system configuration).
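To make that hack concrete, it looks roughly like this (the store path is a hypothetical placeholder, and the capability simply follows the ping example above):

    # bind-mount the store to get a writable view of it
    mount --bind /gnu/store /mnt
    # hypothetical store path; the hash and package name will differ on a real system
    setcap cap_net_admin+ep /mnt/<hash>-<ping-package>/bin/ping
    umount /mnt

All of it outside Guix's knowledge, and gone again on the next reconfigure.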
What do you mean by growing your system? “How do you run more services?” Or “how do you handle more traffic?” Or “how do you handle more complexity in the application?”
All of these can be handled with any one (or a mix) of VM images, containers, home-built packages, and configuration-management tools.
If you're following the latest trend, you just run VMs based on an image with Kubernetes pre-installed.
Want to run more services? Just deploy more services on your k8s cluster. Want to scale horizontally to handle more traffic? Just boot up more VMs with the same image and scale out your k8s pods. Want to handle more complexity? Split your job into microservices and give responsibility for different namespaces to different teams.
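In day-to-day kubectl terms that loop looks roughly like this (the manifest, deployment and namespace names are made up for illustration):

    # run another service on the existing cluster
    kubectl apply -f new-service.yaml
    # handle more traffic by adding replicas (after adding worker VMs)
    kubectl scale deployment/web --replicas=10
    # carve out a namespace for another team's microservices
    kubectl create namespace payments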
If you're using a late-2000s, early-2010s model, you would just use VMs and boot them when necessary. You have a VM running HAProxy with dynamic backends.
Want to run more services? Just add more to the base image and route between them with an nginx local to your base VM image. Want to handle more traffic? Just spin up more VMs from the base image. Want to handle more complexity? Use multiple types of VMs with different base images, depending on the service, and give each team responsibility for configuring its own base image.
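And the "dynamic backends" part on the HAProxy VM can be as simple as pre-declaring spare server slots and flipping them on over the admin socket once a new VM is up (the backend/server names and socket path here are invented):

    # a new VM booted from the base image comes up at 10.0.0.13;
    # point a spare, pre-declared backend slot at it and enable it
    echo "set server app_backend/vm3 addr 10.0.0.13 port 80" | socat stdio /var/run/haproxy.sock
    echo "set server app_backend/vm3 state ready" | socat stdio /var/run/haproxy.sock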
I can go on and on about packages and/or configuration-management tools. But I think you get the idea.
> Want to run more services? Just deploy more services on your k8s cluster. Want to scale horizontally to handle more traffic? Just boot up more VMs with the same image and scale out your k8s pods. Want to handle more complexity? Split your job into microservices and give responsibility for different namespaces to different teams.
As someone who strives to keep things simple (as in running the least possible code to achieve something), this part gave me nightmares.
Nothing technically wrong with it, but the "let's just pile up layers" approach works well for maybe 10% of workloads, where attack surface and resource efficiency can be traded for elasticity.
But then come the tradeoffs: complexity is multiplicative (k8s alone won't suffice; prepare to also manage an ingress, then a service mesh, a secret store...). It becomes a security nightmare, leads to technical debt, requires constant updates and manpower, is almost impossible to properly test and document, and has high development costs and resource usage.
To make things worse, this behaves more unpredictably than a traditional 3-tier approach with Compose and Terraform, or nix-deploy.
Adding more complexity to solve complexity turns into a self-replicating problem.
Complexity is relative. I'm an ansible + packages kinda guy, so of course k8s feels bloated. But Nix, with its "compile everything and store the results in deep directories with sha256-hashed names, and if two programs depend on two different patch versions of sqlite, just compile and keep both in parallel like npm" approach, feels really bloated to me.
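You can see that duplication directly in the store; something like this (the program name is just a placeholder) shows which sqlite builds end up side by side and which one a given program's closure pins:

    # see how many sqlite builds are sitting in the store in parallel
    ls /nix/store | grep -i sqlite
    # ask which sqlite a particular program's runtime closure actually depends on
    nix-store --query --requisites $(which some-program) | grep sqlite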
> if two programs depend on two different patch versions of sqlite, just compile and keep both in parallel
This can make sense in a scientific environment where reproducibility as precise as possible (including the exact speed and memory usage of every specific algorithm) is very desirable. But in real life this (different apps ignoring the proper "Unix way" of sharing libraries and updating them independently of the apps, and depending on specific versions instead) is always very annoying. Every time I have to run an old app that requires old libs that no longer exist in my distro version, I just symlink the old names to the new ones and everything works great. Every time I download a Python project with requirements pinned to specific versions, I replace the requirements with "this or newer", update, and everything works great. Theoretically I can imagine a situation where an SQLite version upgrade could break something, but practically I can't - I bet this will never happen to me, and if it does I'll just pay my bet by fixing the problem manually.
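Both tricks fit in a line each (the library name and paths here are invented for illustration):

    # pretend the old soname still exists by pointing it at the newer one
    ln -s /usr/lib/x86_64-linux-gnu/libfoo.so.6 /usr/lib/x86_64-linux-gnu/libfoo.so.5
    # relax exact pins ("this version") into minimums ("this or newer")
    sed -i 's/==/>=/g' requirements.txt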
Nix/Guix is no silver bullet.