I loved this game growing up; I'm definitely going to give this a try.
As a minor observation, I'm pretty (pleasantly) surprised that the project provides a NixOS package for the application. As somebody who tends to flit between Arch, Fedora, and NixOS, seeing packages for NixOS before Fedora availability is a very surprising signal in terms of Linux distribution popularity.
I've been using nix/nixos a lot lately and will probably end up publishing more in that general area of interest. That and my excessively-overengineered homelab.
It's kind of hard to fully appreciate how powerful polymode is without actually _using_ it. When you use an editor that can mostly understand the syntax behind markup such as fenced markdown code blocks like this:
```ruby
puts "Hi"
```
You may be used to native syntax highlighting taking over, but while you're in polymode, moving the cursor into the Ruby portion of the code actually activates ruby-mode: any code checking (with something like flymake) works as expected, and any major-mode-specific actions that reformat code, like M-q or == (while evil is active), will do what you expect. And polymode isn't just limited to fenced markdown.
As the linked article demonstrates, polymode is super flexible. I wrote my own small polymode that watches for comments like /* python */ preceding literal string blocks in nix code (which look like ''print("Hi")'' and are usually multi-line) and switches those regions into their native modes (it doesn't have to be python), and it's really great.
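To give a feel for it, the nix side of that pattern looks something like this (the attribute name and script body are just illustrations, not from my actual setup):

```nix
{
  # The polymode watches for a comment like the one below and switches
  # the following indented-string literal into python-mode.
  buildScript = /* python */ ''
    import os
    print("Hi from", os.getcwd())
  '';
}
```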
That's a great plan! I'll say that self-hosting may be _the_ number one thing I'm most passionate about due to concerns similar to yours (privacy, ownership, and so on). I've self-hosted many of my own services for a very long time and so I have my own experiences to share as well.
I'll say right off the bat that I don't see any red flags with your proposed plan. The following bullet points are primarily meant to offer some additional options or mental nudges to help you brainstorm - like I said, there's nothing abjectly wrong with your architecture, so this list may just offer more ideas:
- I've self-hosted a few email servers (and still do) and I think punting on that (or just doing the backup plan) is probably the right approach - you can DIY it today, but it's a part-time job. If you ever do decide to take ownership of your email, bringing your own domain to Fastmail or Proton Mail has also worked well for me. Today I host one domain on Linode and one on Ramnode. As with most things email, there are tons of nuances with doing it yourself - I had to get both my email servers' public addresses placed on an allowlist with their respective providers.
- I self-host most of my services on my own hardware in my homelab. I eschew the big, expensive, loud, power-hungry hardware in favor of smaller, cheaper, and swappable hardware, and the strategy has worked out really well. I primarily use ODroid hardware (they offer both ARM and x86-64 hardware). You mentioned a floating/non-public address as a constraint, so you could still do this with tailscale/headscale/something similar and gain the benefit of cloaking your services inside a private network (and using some public/free cloud instance as a low-power VPN endpoint). I don't think DigitalOcean/Linode are bad choices, but I very much like owning the hardware layer as well.
- I was self-hosting before Nextcloud existed, used its progenitor (ownCloud), and developed a harsh distaste for the huge, sprawling complexity of the system (it was hungry for resources, broke on upgrades constantly, etc.). That story may be better now, but I've since moved on to hosting very targeted, smaller services. For example, instead of Nextcloud's file syncing, I run syncthing everywhere, and instead of Nextcloud's calendaring, I run radicale. Nextcloud will probably be fine, but I've been happier running a smaller collection of services that do one thing well (syncthing in particular is an exceptional piece of software).
I could really ramble on, but I'll just include a list of the stuff I host in case you have any questions about it. I blog[1] about some of these, too: Transmission, Radarr, Sonarr, Jackett, Vaultwarden, espial, glusterfs, kodi, photoprism, atuin, Jellyfin, Vault, tiny tiny rss, calibre, homeassistant, mpd, apache zeppelin, and minio. Outside my lab hardware I run a few instances of nixos-simple-mailserver, mastodon, and goatcounter (I used to run plausible). I also run a remote ZFS server that I mirror snapshots to as my remote backup solution.
> I would love to see a discussion from somebody who really likes Nix on why it isn't ready for prime time yet/just play devil's advocate aloud on why it isn't the greatest thing since sliced bread.
Top reasons in my mind:
1. Error messages. Even with my >1 year of experience using NixOS full-time, I've encountered errors that I simply _cannot_ fix. This is getting better (recent nix releases let you introspect problems more easily).
2. Documentation gaps. Much of the nix documentation is actually pretty good now! But sometimes you run into "missing links" between topics that make it really hard to find the answer you need.
> What is a real world use case where Nix isn't overkill? I've read toolchains but... nvm (node version manager), rustup. I install Rust on a machine once and I never think about it again.
For me, nix is unbelievably powerful for constructing system images for various virtual machine formats. I'm using the nixos-generators[1] project to construct 8 or 9 image formats from one NixOS configuration. Packer and similar tools are the non-nix analog, but nixos-generators essentially requires adding a single line to support something like Proxmox, as opposed to much more work in some other tool.
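For a flavor of what that looks like, here's a minimal sketch (the module path and format choices are illustrative; `nixosGenerate` is the function nixos-generators exposes):

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    nixos-generators = {
      url = "github:nix-community/nixos-generators";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, nixos-generators }: {
    # Each attribute below is a different image format built from the
    # same configuration; supporting a new format is one more entry.
    packages.x86_64-linux = {
      proxmox = nixos-generators.nixosGenerate {
        system = "x86_64-linux";
        modules = [ ./configuration.nix ];
        format = "proxmox";
      };
      amazon = nixos-generators.nixosGenerate {
        system = "x86_64-linux";
        modules = [ ./configuration.nix ];
        format = "amazon";
      };
    };
  };
}
```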
I'm also using nix to build all our team's software - which varies from Vue.js to Rust to Django - and fold _all_ those development dependencies into a singular nix `devShell`. This means you can clone our repository, run `nix develop .`, and all engineers use identical versions of all software, dependencies, and libraries across several toolchains. It's incredibly convenient. (Imagine that you're a .js developer who needs to make a quick edit to a Rust backend route but doesn't sling Rust all day - you don't need to know how to set up Rust at all; the `devShell` provides it.)
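The shape of that kind of `devShell` is roughly this (the package list here is illustrative, not our real one):

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      # `nix develop .` drops every engineer into the same pinned
      # versions of every toolchain at once.
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [
          pkgs.nodejs   # Vue.js frontend
          pkgs.cargo    # Rust backend
          pkgs.rustc
          pkgs.python3  # Django
        ];
      };
    };
}
```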
It's a good question, and a very mature/well-engineered Docker dev environment probably gets you near-parity with an equivalent nix setup. That said, my reasons are:
- Although not _all_ of our projects need nix builds in the end, at least a few do, and acquiring their devshells is essentially zero-effort (you just ask nix for the devshell for the package instead of the package output itself)
- As some other commenters have noted, dealing with large container contexts can get hairy/slow. A devshell just tweaks a few environment variables, which is less of a tangle when working on the various projects (I use direnv, so my emacs session hooks into `flake.nix` and finds everything in-path automatically)
- While you could get bit-for-bit identical dev environments by pushing a built container image to a registry that all devs pull from, I think most people would write a `Dockerfile` once and let folks build it locally before hopping in, which leaves a small (but extant) possibility that some environments may be subtly different (shifting container tags, ad-hoc apt-get commands, etc.). A flake.nix coupled with a flake.lock means all devshells are lock-step identical.
I don't know much about Nix, but I'm planning on reading more about it.
Since you mentioned packer/proxmox/nixos-generators - am I understanding correctly that nix could be used instead of packer to generate a VM image/template for proxmox (or whatever hypervisor)? Is it limited to NixOS, or could it create a centos image, as an example?
I've used a combination of packer+ansible/chef/salt to create images, but it's always felt a little clunky.
> [...] am I understanding correctly that nix could be used instead of packer to generate a vm image/template for proxmox(or whatever hypervisor)?
That's correct, although as you mention later, it's limited to NixOS. It's an unfortunate constraint given how tremendously powerful it is, but the limitation makes sense. nixos-generators is literally a nix function that accepts a NixOS configuration and operates on the nix value in order to construct the VM image, so it wouldn't work quite the same without nix at the core.
Because nix "knows" about the total definition of the system, nixos-generators can do wonderful things like construct an AMI image or VMWare image without talking to any hypervisor APIs or cloud APIs at all - it knows how to construct the system from the "ground up" and you just end up with a big disk image file after the appropriate `nix build ...` command, no AWS keys or running Proxmox API required.
It's a tradeoff, to be sure. But if the use case fits - which for us, it does quite well - you can be outrageously productive. Adding an entirely new image format takes as long as typing a new line in `flake.nix`, and I leverage a common NixOS module between all the image types to invoke a quick and easy local qemu VM when I'd like to experiment locally.
It's become hard to imagine what managing a "normal" system would be like with the features I get by baking it all into a NixOS configuration.
I think the nix community is doing an increasingly better job of addressing several outstanding issues, and although the OP link resolves one of them - the getting-started experience - the "why" that you highlight here is still a hard one to answer without getting into the technical weeds. I ran an informal Twitter poll a few months ago and this question (the "why" instead of the "how") was the most-requested kind of content.
One potential answer to this is, "imagine building and running software with lockfiles for _literally everything_". I'm not just talking about _versions_ of dependencies or shared libraries - which nix does - but also things like:
- Locking the current point in time (nix resets the build sandbox to the unix epoch)
- Locking out network conditions (all build dependencies need to be fetched and therefore expressed as part of the instructions and not left to "at some point during the build")
- Locking out access to any system state (again, builds occur in a sandbox populated only with what you indicate within the build instructions)
That's what's behind the marketing for reproducibility and repeatability. If you lift those principles into new and interesting applications, the various other uses for nix fall out of them:
- NixOS takes the principle behind those nix builds and applies it to building not just packages but the entire system, like the files it places in /etc or the running kernel.
- Projects like devenv[1] or flake devShells in general re-use the portability of a fully-defined nix package to ship hard-to-break executables into share-able devshells so your peers can work with the bit-for-bit same version of Terraform or python (without worrying about what version of /lib/libssl.so they may have)
- Since nix "owns" the _entirety_ of the inputs and outputs to a piece of built software, shaping it into different artifacts becomes trivial. For example, as long as you're able to build a rust project with nix, nix easily lets you kick out a minimal OCI container (without needing to write a `Dockerfile`), or produce a tiny qemu clone of your system (or any system) by feeding your NixOS configuration into a function that produces images instead of configuring your running system.
Hopefully that's helpful, sorry if it isn't - but nix is sort of alien software, and nix people are still learning how to best share its potential with others!
In your specific case - a _channel_ versus a flake _input_ - consider how you're tracking your system configuration. If you have an /etc/nixos/configuration.nix, then your system can be reconstituted _only_ if you have that configuration.nix in addition to the revision that your channel is currently on. Compare this with a system defined in a flake's `nixosConfigurations`, which accepts its version of nixpkgs from the flake's inputs, so you can rebuild/recreate the system from the flake entirely without needing to piece together other bits of state like the current channel revision/nixpkgs checkout.
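In flake form, that self-contained definition looks roughly like this (hostname and module path are placeholders):

```nix
{
  # This input is pinned by flake.lock, so the system can be rebuilt
  # from this file plus the lockfile alone - no channel state needed.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}
```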
Is this more deterministic than pointing Nixpkgs at a specific commit/tarball in the configuration? I have often done this to make reproducible builds in other Nix settings and it has worked well.
It is: using Flakes, you're thrust into a more pure evaluation mode by default, and it creates a (standard) artifact of the revision you're on: the flake.lock.
You can get almost all the benefits Flakes brings without Flakes using alternatives like niv.
> It is: using Flakes, you're thrust into a more pure evaluation mode by default, and it creates a (standard) artifact of the revision you're on: the flake.lock.
Can you name a specific example of the kind of nondeterminism/nonreproducibility I risk by simply pinning Nixpkgs itself (or to be more principled, using Niv)?
Certain environment variables at the time "nix-build" is executed can affect the nix configuration. Nix flakes default to ignoring those environment variables.
Flakes are also the way forward for pinning nixpkgs; it autogenerates a list of revisions used to pin, and allows easy updating. Flakes and niv, to a large degree, are about solving the same problem.
Yes, because it also makes your Nixpkgs config explicit (doesn't read ~/.config/nixpkgs/config.nix unless you source it in-repo) and doesn't rely on env vars (e.g., NIX_PATH) which are not given explicitly in the flake.
But pinning Nixpkgs alone does get you much of the way there (and for many use cases— those where the only referent on it is 'nixpkgs' or 'nixos'— does make NIX_PATH redundant anyway).
> But pinning Nixpkgs alone does get you much of the way there (and for many use cases— those where the only referent on it is 'nixpkgs' or 'nixos'— does make NIX_PATH redundant anyway).
What is missing compared to using flakes? The only `NIX_PATH` component I normally use is `nixpkgs`, which hooks into the whole channel system (which is bad), but by setting my `pkgs` variable to a specific commit/tarball, I can ignore `NIX_PATH` entirely, right?
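To be concrete, I mean something like this (the revision and hash are placeholders):

```nix
let
  # Import nixpkgs from an exact revision; the sha256 guards against
  # the tarball's contents changing out from under you.
  pkgs = import (builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz";
    sha256 = "<sha256-of-the-tarball>";
  }) { };
in
pkgs.hello
```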
Traditionally you might also use NIX_PATH for secondary sources of packages and modules, like the Emacs overlay, NUR, or the Nix-Darwin repo.
But yeah if you pin all of your package sources and set your Nixpkgs config locally, you get all the benefits aside from the Nix evaluator's 'pure' mode, which gets some evaluation speedups due to caching.
Apologies if this is only tangentially related, but it's a thought I've often had:
When I went through my university degree (which, admittedly, was over a decade ago), the skills and training for the day-to-day tools of software engineering - like git - were almost never taught in a dedicated way, unlike concepts such as data structures or algorithms. With degree in hand, the average graduate would probably be more comfortable reversing a linked list or estimating the O(n) of an algorithm than performing a git rebase. Yes, the latter is technology-specific rather than a generalized CS principle, but I have to imagine that some capstone-type work could spend time properly training a white-collar worker deeply on a standard few industry tools.
I'm not saying that the author is wrong, because git is indeed deviously inscrutable at times. However, after my twelfth or thirteenth failed merge or rebase, I really _dedicated_ some time to figuring out what the hell git was doing from the ground up - which wasn't easy or quick - but I haven't felt truly mystified and incandescently angry at git since. It probably isn't fair to expect every Joe Software Engineer to spend that kind of time on each tool they're expected to use (I'm a masochist and enjoy doing it), but it bums me out to run into the occasional software engineer who never had the opportunity to deeply learn and practice their closest tools the way a nurse does with an ultrasound machine or a court reporter with a stenotype. In a more perfect world, a well-rounded software engineering education program would give people space for that, but we're left with "squeezing it in between sprints" or "on weekends" because it's hard to imagine a company granting people ample time for professional development that doesn't transparently inflate OKRs.
Maybe my anecdote is out-of-date now and the situation is better, but I do still feel like most of us have had that "coworker had to blow away their local repo because git became too tangled" experience in recent memory. I also sort of assume that, in a paradoxical kind of way, code bootcamps may do this type of thing better, but I don't have that experience to draw from.
1) Thinking university degrees are meant to prepare you for jobs. This is not why higher education institutions like universities came to be centuries ago; universities are about sharing knowledge and researching. I would even argue that 99% of jobs requiring a degree don't really need one, software development being one of those. Hell, one of the most important and famous chief surgeons in Italy, with various high-impact papers, was found to never have even started a medical degree.
2) Of all the degrees, I would safely say SE and even CS are among those closest to the day-to-day tools at work.
3) Extrapolating your experience and assuming it to be similar to that of students at other colleges, in other years, and in other countries.