Wanted to share that my province, British Columbia, is pretty good about this. My team was hired to build BC’s Digital Marketplace (https://digital.gov.bc.ca/marketplace), which procures teams to build software for government that is licensed under the Apache 2 License!
Congrats to the core team! I'm super happy to see 0.19. We recently went live with a website that was fully written in Elm 0.18 (www.project6.com). I'm excited to upgrade to 0.19 and take advantage of the new browser package. If anyone is interested in learning more about the decision-making process behind choosing Elm for that project, I discussed it during an Elm Town podcast episode (https://elmtown.audio/the-risk-of-elm-dhruv-dang).
Great book to read if you are considering becoming a front-end developer! Instead of just "tolerating JavaScript's quirks", the author taught me how to use the language's unique features to build maintainable applications elegantly.
I can vouch for the immense improvement Nix has made to my software development process. I use NixOS on my desktop and laptop. At the OS level, it gets a lot of things right: reproducible, immutable system configs; lightweight containers; devops with declarative configs. At the software-project level, nix-shell is an indispensable tool. Compilers and interpreters aren't even part of my system-wide config; instead, each project has its own shell.nix file that installs all the dependencies I need on the fly, without polluting system state or resorting to virtualization. Nix is a godsend, and the developers who contribute to it are nothing short of awesome!
The area that needs improvement is the documentation. Once you learn the Nix language, reading the source code is pretty helpful, but it would be nice to make it more approachable. For example, the nixpkgs repo has a bunch of Nix helper functions that are useful to developers writing their own packages, but these functions' documentation is either buried in a long manual or non-existent.
From reading your comment and a bit of the package declarations in some Nix packages, it seems like Nix's software package system is very similar to Rez.
Cool! Indeed, Rez does something similar on the package management side.
For Nix, however, this is almost a side effect. Nix's primary task is to describe how to build packages, and the entire OS is just another package that groups all its dependencies. You can make a single change to a file everything depends on, and it will result in a new OS.
Nix can evaluate the entire system configuration in seconds, and build or download missing binaries in parallel.
As a result you can e.g. have a complete OS based on a different glibc (maybe with some patch you like) installed and running alongside the normal OS without the glibc patch. Packages that do not use glibc are simply shared.
It takes a list of requested packages, does some dependency resolution to arrive at a matching package list, generates environment variable setting/exporting code (in Python, e.g. setting things like PYTHONPATH and MAYA_PLUGIN_PATH), then launches a new shell with that code. So, yes, you can have multiple shells with different packages in use simultaneously. I don't think it operates below the industry software package level, which makes sense, as most industry software is non-free, closed-source binaries. Each package has a `package.py` with its own list of dependencies, and code for things like manual tweaks to env vars.
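A toy Python sketch of that flow (package names, paths, and the metadata shape are all made up for illustration; real Rez reads package.py files and does actual dependency resolution):

```python
# Hypothetical sketch of a Rez-style step that turns a resolved
# package list into environment-variable setup code for a new shell.
# All names and paths here are invented for illustration.

def emit_env_script(packages):
    """Generate shell export lines from resolved package metadata."""
    lines = []
    for pkg in packages:
        root = f"/packages/{pkg['name']}/{pkg['version']}"
        for var, subdir in pkg.get("env", {}).items():
            # Prepend each package's path onto the variable.
            lines.append(f'export {var}="{root}/{subdir}:${var}"')
    return "\n".join(lines)

resolved = [
    {"name": "maya_tools", "version": "1.2.0",
     "env": {"PYTHONPATH": "python", "MAYA_PLUGIN_PATH": "plugins"}},
    {"name": "usd", "version": "21.08",
     "env": {"PYTHONPATH": "lib/python"}},
]

print(emit_env_script(resolved))
```

The generated script would then be sourced by the freshly launched shell, which is why environments are cheap to create: nothing is copied, only variables are set.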
>"Using Rez you can create standalone environments configured for a given set of packages. However, unlike many other package managers, packages are not installed into these standalone environments. Instead, all package versions are installed into a central repository, and standalone environments reference these existing packages. This means that configured environments are lightweight, and very fast to create, often taking just a few seconds to configure despite containing hundreds of packages."
At different runtimes (e.g. different package.py configurations), yes.
I can have a software package called my_application that has one version using A and another version using B.
That differentiation can be a flag at runtime, a "variant" (e.g. gcc-4 vs. gcc-3), etc.
How does Nix allow you to do that? For example, say I am using a library foo that has a function bar.
In version 1, bar is defined as:
```
def bar(a, b):
    return a + b
```
In version 2, bar is defined as:
```
def bar(a):
    return a + 10
```
I don't understand how you can use v1 and v2 of foo in the same runtime. Unless Nix does some namespacing based on the library version, invisibly to the end user? But that seems very error-prone...
If you release them as separate libraries it won't complain. For example, Shotgun API imports can be done by version:
```
import shotgun3
import shotgun2
```
so theoretically you can release a rez package whose name is shotgun3 and another one whose name is shotgun2.
That way you would effectively have shotgun2 and shotgun3 in the same runtime environment.
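A small, self-contained Python illustration of the trick described above; the shotgun2/shotgun3 modules are fabricated on the fly here, standing in for two separately installed packages:

```python
# If two versions ship under different module names, Python can load
# both in one process. These modules are built dynamically as
# stand-ins; in practice they'd be real, separately released packages.
import sys
import types

for name, version in [("shotgun2", "2.0"), ("shotgun3", "3.0")]:
    mod = types.ModuleType(name)
    mod.__version__ = version
    sys.modules[name] = mod  # register so `import` finds it

import shotgun2
import shotgun3

# Both versions coexist in the same runtime.
print(shotgun2.__version__, shotgun3.__version__)
```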
I wasn't really thinking about language-level runtimes; I don't want to solve the problem of Python having only one global module scope and running into name collisions. I'm not sure how concerned Rez is with shell environments beyond which (e.g.) Python packages are available in them, but I was wondering about having a shell where a program foo is available that, say, requires a lib compiled with gcc-4, alongside another program bar that requires the same lib at the same version but compiled with gcc-3.
Nix spends a bunch of complexity fiddling with dynamic linking to make that work transparently most of the time, and deploys wrapper scripts to cover other cases, so I was curious whether Rez has a cleverer solution there.
Essentially, Nix treats every version of a package, or every "instance" of a package compiled with different flags, or different "build inputs", as a completely different dependency. So foo-1.0, foo-2.0, and foo-2.0-debug are all separate things you can depend on, from the POV of the package manager. A huge "symphony" (a friend's term) of symlinks holds it all together.
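A toy sketch of that idea in Python (the path layout and hashing scheme are simplified stand-ins for what Nix actually does):

```python
# Rough sketch of Nix's core idea: a package's identity is a hash of
# everything that went into building it, so foo built against gcc-4
# and foo built against gcc-3 land at different store paths and can
# coexist. Hashing details and path layout are simplified here.
import hashlib
import json

def store_path(name, version, build_inputs, flags=()):
    ingredients = json.dumps(
        {"name": name, "version": version,
         "inputs": sorted(build_inputs), "flags": sorted(flags)},
        sort_keys=True)
    digest = hashlib.sha256(ingredients.encode()).hexdigest()[:32]
    return f"/nix/store/{digest}-{name}-{version}"

a = store_path("foo", "2.0", ["gcc-4", "glibc-2.27"])
b = store_path("foo", "2.0", ["gcc-3", "glibc-2.27"])
c = store_path("foo", "2.0", ["gcc-4", "glibc-2.27"], flags=("debug",))
print(a)
print(b)
print(c)
assert len({a, b, c}) == 3  # same version, three distinct dependencies
```

Symlinks (and rewritten linker paths) then stitch a chosen set of these store paths together into a usable environment.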
Sounds like this solves a similar problem to Docker. Can you comment on what the differences are, and the relative strengths and weaknesses of each approach?
So Nix is ultimately a tool for making and sharing reproducible package builds. It has a binary cache, but it's not necessary. Like Ports, packages get built by default.
Docker, on the other hand, is a distribution and execution mechanism. It provides an abstract way to move around a fully assembled, ready-to-go service or application running in isolation.
It's entirely reasonable to use both. You can use Nix to build and manage docker images and make extremely minimalist docker images. You can use Nix knowing that the entire process is perfectly reproducible, and the Docker containerization is only a final integration step.
With this, you sorta get the best of both worlds. You get a reproducible build (and if done right, also a reproducible dev environment via nix-shell) and with Docker you get the ability to build and run a prepped copy with a well-defined interface.
Docker really doesn't provide a way to reproduce a built image from scratch. You sort of have to trust and build on existing images, and most folks producing images in bulk rely on external tooling outside of Dockerfiles to do this.
> Sounds like this solves a similar problem to Docker. Can you comment on what the differences are, and the relative strengths and weaknesses of each approach?
NixOS committer here.
Docker attempts to achieve reproducibility by capturing the entire state of a system in an image file. It also attempts to conserve space by taking a layered approach to images, so when you base your Dockerfile on some base image, your resulting image is the union of the base image's layers and your own changes.
Here's where Docker's approach falls down, and how this could be fixed (and indeed is, by Nix).
Flaw #1:
Building an image from a given Dockerfile is not guaranteed to be reproducible. You can, for example, access a resource over the network in one of your build steps; if the contents of that resource changes between two `docker build`s (e.g. a new version of whatever you're downloading is released, or an attacker substitutes the resource), you'll silently get different resulting images, and very likely will run into "well, it works on my machine" issues.
Solution:
Prohibit any step of your build process from accessing the network, unless you've supplied the expected hash (say, sha256) of any resulting artifacts.
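A minimal Python sketch of that fixed-output idea; fetch() is simulated here, but the check mirrors what Nix's fetchers do in spirit:

```python
# Sketch of a "fixed-output" fetch: a build step may only pull in a
# resource if the expected sha256 is supplied up front, and the
# result is rejected on mismatch. The fetch is simulated with a
# lambda so this runs without any network access.
import hashlib

def fetch_fixed_output(fetch, expected_sha256):
    data = fetch()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(
            f"hash mismatch: expected {expected_sha256}, got {actual}")
    return data

payload = b"release-1.0 tarball contents"
good = hashlib.sha256(payload).hexdigest()

# Matching hash: the fetch succeeds.
assert fetch_fixed_output(lambda: payload, good) == payload

# A silently changed upstream resource now fails loudly instead of
# producing a subtly different image.
try:
    fetch_fixed_output(lambda: b"tampered contents", good)
except RuntimeError as e:
    print(e)
```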
For the projects I work on in my free time, using NixOS on my personal computers, I've never been bitten by nondeterminism.
I wish I could say the same about my work projects that use Docker. My team members and I have run into countless issues where our Dockerfiles stop working and we have to drop everything and play detective so that, e.g., new hires can get to work, or put out fires in our CI environment when the cached layers are flushed. So many wasted hours.
Flaw #2:
What happens if you have two Dockerfiles that don't share the same lineage, but install some of the same packages? You end up with multiple layers on disk that contain the same contents. That's wasted space.
Solution:
I'll use NixOS as an example, again. In NixOS, you can look at any package and compute the entire dependency graph. It should be noted that this graph includes not only the names of the packages, but also precisely which version of each package was used as a build input. This goes for both build inputs and runtime dependencies.
Note: by "version" I mean not only the version listed in the release notes, but every detail of how the package was built: which version of Python was used? And then, transitively: which C compiler was used to compile that Python? Etc.
NixOS exploits this by allowing you to share packages with the host machine: each NixOS container you spin up has the necessary runtime dependencies bind-mounted into the container's root file system. As a result, NixOS has better deduplication (read: zero duplication). Also, by the same graph-traversal mechanism, it's trivial to take any environment, serialize the runtime dependency graph, and send that graph to another machine, back it up somewhere, create a bootable ISO for CD/USB -- whatever you can dream up.
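The graph traversal behind this is just a transitive closure. A toy Python version, with made-up package names:

```python
# Toy version of computing a package's runtime closure: walk the
# dependency graph transitively, so the full set of dependencies can
# be copied to another machine, serialized, or bind-mounted into a
# container. Package names below are invented for illustration.
def closure(graph, root):
    seen, stack = set(), [root]
    while stack:
        pkg = stack.pop()
        if pkg not in seen:
            seen.add(pkg)
            stack.extend(graph.get(pkg, []))
    return seen

deps = {
    "myapp-1.0": ["python-3.6", "openssl-1.1"],
    "python-3.6": ["glibc-2.27", "openssl-1.1"],
    "openssl-1.1": ["glibc-2.27"],
    "glibc-2.27": [],
}
print(sorted(closure(deps, "myapp-1.0")))
```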
Thanks to Nix's enforced determinism, you can trivially build container environments (and all of their required packages) in parallel across a fleet of build machines. In fact, Nix's superiority at building packages is so strong that people have gone so far as to build Docker images using Nix instead of `docker` (where they can't avoid Docker entirely for whatever reason): https://github.com/NixOS/nixpkgs/blob/2a036ca1a5eafdaed11be1...
I'm keeping things simple here and trying to address the most salient "Docker vs Nix" points, though I could continue talking about other strengths of NixOS outside of the scope of Docker/container tech, if desired.
Docker's union filesystem approach is great in a world where you can't use a better package manager. For everyone else, there are package managers that obviate the need for such hacks, don't waste space, provide determinism at both runtime and build time, etc.
The main difference between the two is Nix's ideas come from functional programming, and Docker's are imperative. The practical outcome of this is that it's easier to keep your system clean over time with Nix than Docker because of the way it's been designed.
Nix isn't only a package manager; it is also a functional programming language intended for system administration. This means that, while a Nix file is comparable to a Dockerfile, it has several key differences:
1. All Nix files are just functions that take a number of arguments and return a system config (like a JSON object, but with some nice functionality). A Dockerfile is a set of commands you run to build a system to a "starting" state. This is imperative (you're telling the computer to "do this", then "do that", etc.), so once it's done, you can mutate state and deviate from what you specified in your Dockerfile. With Nix, while you can technically do this on some systems, you're given command-line tools (e.g. nix-shell, nix-env) that help you avoid breaking things. Note that on NixOS, other measures are taken to encourage safety.
2. If things go wrong in Nix(OS), the idea is that you can do a fresh install, copy your old Nix file over, and with one command be back to where you were before things went haywire. In terms of containers, there's nothing new here; this is exactly what Docker does. However, Nix also has a concept of generations: every time you use a Nix command to change your system state, either declaratively via a Nix file or imperatively on the command line using nix-env, you can roll back to a previous version of that state. This is especially nice with NixOS, because it creates generations for your entire system too (including hardware config, drivers, kernel, etc.) and makes separate GRUB entries for each generation, so if something breaks after a system upgrade, you just choose an old GRUB entry to go back to where you were. AFAIK, Docker doesn't offer anything like this, and it's a good example of how these tools' designs can impact their feature sets so dramatically.
3. A neat feature of Docker is composability. You can inherit from other, pre-existing Dockerfiles, and you can deploy multi-container apps with various tools. Composability at the single-container level is very straightforward with Nix. Since every config is just a function, you simply call the function exposed by a different Nix file with the correct arguments, and... voila! Once you've made your desired Nix file, you can run it using either nix-shell or nixos-container. While I'm no expert, I believe they perform well, since (like Docker containers on Linux) they rely on kernel-level isolation rather than full virtualization. For multi-container deployment, there is NixOps. You write some Nix files describing the VMs you want to deploy, and run one command to deploy them to various back-ends (AWS, Azure, etc.). Again, the big difference here is that you can incrementally modify these VMs in a safe way using Nix. If you change your deployment config file, Nix will figure out that something has changed, and modify the corresponding VMs to achieve the desired state.
Some may believe that Docker and Nix are very similar, and to their credit, they are in certain scenarios. The thing I like about Nix is that it's one language (and architecture) that was designed well. It's minimal, yet makes it possible to do so much in a safe way.
Nix has been around for a while, but I think the community is growing quickly as functional programming continues to take off. I'm excited to see where it goes, and am super grateful I have a tool like this to use while coding.
these functions' documentation is either buried in a long manual
This is a problem with lots of feature-rich software, even with meticulously-documented APIs. What we need is reverse-indexed documentation. That is, an extensive API reference is only useful for someone who already knows what functions are in the API and just needs to remember how to use them. But even the most thorough API reference does nothing to promote discovering new functionality. This is often left to the authors, who then have to go about writing a User's Guide that gradually explains concepts, idioms, etc. in prose.
Thorough User's Guides are rare because they are tough to write, and even tougher to write well. Users don't often have the time to read through potentially hundreds of pages of prose to find what they're looking for. We need a better way to let users search or browse for concepts, and then be given a list of the functions that implement each concept.
That is, in addition to documentation like:
```
size_t strlen(const char *s);
    RETURN: length of string s.

size_t strnlen(const char *s, size_t maxlen);
    RETURN: length of string s, or maxlen (whichever is smaller).
    NOTE: stops reading after maxlen bytes.

char *stpcpy(char *dst, const char *src);
    Copy src to dst.
    RETURN: pointer to the trailing '\0' of dst.
    NOTE: undefined behavior if dst and src overlap.

char *stpncpy(char *dst, const char *src, size_t len);
    Copy up to len bytes from src to dst.
    RETURN: pointer to the trailing '\0' of dst, or &dst[len] if there is no trailing NUL.
    NOTE: undefined behavior if dst and src overlap.

char *strcpy(char *dst, const char *src);
    Copy src to dst.
    RETURN: dst.
    NOTE: undefined behavior if dst and src overlap.

char *strncpy(char *dst, const char *src, size_t len);
    Copy up to len bytes from src to dst.
    RETURN: dst.
    NOTE: undefined behavior if dst and src overlap.
```
We also need to be able to "tag" functions, so that the functions above could be searched by concepts like "string length" or "string copy".
One great approach is the Hoogle search engine for Haskell [1]. The idea is that you search by type instead of by name. So if you were looking for a function that takes an item and returns a list with n copies of that item, you would search for `a -> Int -> [a]`, which would give you back replicate.
It looks like the Nix expression language is untyped, so this wouldn't work directly, but maybe adding a rough type signature to the docstring would get some of those benefits (and it should be a bit better for discoverability, since you wouldn't need to guess the same tags/concepts the author chose).
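As a rough sketch, here is what signature-based search could look like in a dynamically typed setting (Python for illustration; the indexed functions are made up, and this only works where authors added annotations):

```python
# Hoogle-style lookup for a dynamically typed language: index
# functions by their annotated signatures and search by shape.
# The two example functions are invented for this demo.
import typing

def replicate(item: object, n: int) -> list:
    """Return a list with n copies of item."""
    return [item] * n

def total(xs: list) -> int:
    """Sum a list of numbers."""
    return sum(xs)

def find_by_signature(funcs, arg_types, return_type):
    hits = []
    for f in funcs:
        hints = typing.get_type_hints(f)
        ret = hints.pop("return", None)
        if ret is return_type and list(hints.values()) == list(arg_types):
            hits.append(f.__name__)
    return hits

# Searching for "something -> Int -> list" finds replicate.
print(find_by_signature([replicate, total], [object, int], list))
```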
It's a total hack, of course, but no less effective or useful for that.
It has a list of methods marked "safe to experiment with", and simply tries them out.
It gets a big boost from being able to evaluate the receiver (the first in the input list) to a concrete object, and then only consider methods on that object's class.
I worked with Visual Smalltalk for years, and early on I created my own tool that found all methods whose source code contained a given string or wildcard pattern. It was surprisingly effective because it integrated easily into the Smalltalk browser windows: rather than printing out the set of methods found, it opened a list browser with all the found methods in it, so I could then easily browse their senders, or the methods they called, and so on.
So how did you know what to search for? A simple case was to look at the ready-built application's GUI, which usually contains some strings. If you wanted to change something in the system that would result in a change to its GUI, you just searched for source code containing some of the words you saw in the GUI.
I've looked into it a bit. The VisualWorks Smalltalk documentation gave me an idea [1].
In Smalltalk everything is an object, including numbers, characters, etc. The standard method lookup searches from the innermost class's methods to the outermost class's methods.
The method finder could simply iterate through all applicable methods that return the requested class and match the results.
I don't recall reading about it anywhere, but I did once look at the implementation (it's wonderful having a reflective system where a single click gets you source of anything onscreen), and it's very straightforward but also kind of gross.
One neat trick I'd forgotten is that if it doesn't find anything interesting with the inputs in the order written, it permutes them and tries again!
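A minimal Python take on that brute-force idea, with a hand-picked safe-list of methods (the real MethodFinder curates one too):

```python
# Brute-force method finder in the spirit of Smalltalk's MethodFinder:
# given a receiver, arguments, and a desired result, try every method
# on a "safe to experiment with" list, including permuted argument
# orders, and report the ones that produce the expected value.
from itertools import permutations

SAFE = ["__add__", "__mul__", "count", "replace", "find", "upper"]

def method_finder(receiver, args, expected):
    hits = []
    for name in SAFE:
        meth = getattr(receiver, name, None)
        if meth is None:
            continue  # receiver's class doesn't have this method
        for perm in permutations(args):
            try:
                if meth(*perm) == expected:
                    hits.append(name)
                    break
            except Exception:
                pass  # wrong arity or types; just move on
    return hits

print(method_finder(3, (4,), 7))        # ['__add__']
print(method_finder("abc", (), "ABC"))  # ['upper']
```

Restricting the search to the receiver's own methods is what keeps this tractable, exactly as described above.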
This just looks like you're missing things like classes and namespaces in C. Classes, namespaces, etc. are a natural way of making an API discoverable (among other things). For example, if you want to know how to search for something in a string, just look at the methods exposed by the string class.
Then you just pass the buck over to classes and namespaces. You have the same fundamental problem in Ruby and Python, perhaps even worse for lack of typed function signatures.
I think this is a great idea, and something that could hugely improve semi-automated documentation sites, man pages, etc.
I also think that you could get a lot of people to agree that it is a great idea, and STILL have a lot of trouble enforcing it in a project without very rigid code review policies and very good linting rules.
Anybody can run guix publish, however, and self-publish their own non-free substitutes if they want; plus there are all the existing non-free substitute servers around, and you can try your luck with guix import from nixpkgs. The community won't help anybody who asks for help with non-free software, though.
Surprisingly, I haven't missed any non-free packages since switching to Guix; I always seem to find a free alternative, though my daily requirements are pretty basic.
It has lots of strong motivations, like a stricter stance on non-free software, the use of a "real", more expressive language (Guile) instead of a niche, poorly documented DSL (Nix), and the use of GNU Shepherd instead of systemd for the init system, among others.
> niche NIH rehash
It shares most of the codebase with Nix, so to say it's NIH is missing the point. Nix improvements are shared with Guix, and Guix improvements are shared with Nix
> unless you're already victim to lisp-induced stockholm syndrome
Different Lisps offer different syntaxes, features, and use cases. That you cannot see that makes me think you didn't try very hard, and are probably just trolling.
Guix doesn't really share any of the codebase with Nix. Guix uses a modified version of the Nix daemon, but that is a tiny fraction of the total source code. Guix is an alternative implementation of the packaging model pioneered by Nix, plus additional features not found in Nix.
> It shares most of the codebase with Nix, so to say it's NIH is missing the point. Nix improvements are shared with Guix, and Guix improvements are shared with Nix.
In Guix, the only code from Nix is the Nix daemon. There is not really much code going back and forth, mostly just bug fixes.
I’d just like to interject for a moment. What you’re referring to as Guix is, in fact, Nix/Guile, or as I’ve recently taken to calling it, Nix plus Guile.
When I look at a new distro, I look at how well the repo covers binary packages; I have no time or resources to build compilers and big projects. Last time I checked Nix, there weren't enough packages in the repo for it to be usable on a laptop.
But the package manager itself might be a good idea for dev environments, though all the languages I use provide a similar facility (not counting system-library dependencies).
I'm confused by this statement. `nixpkgs` contains tens of thousands of packages, the majority of which have binary substitutions via the NixOS Hydra build farm. Usually the only time I build things from source on my NixOS laptop are when I override some package to use a patch of my own design. Hell, when I was using Arch Linux I found myself building things from source _way_ more often than after switching to NixOS.
I don’t know when you were last looking but for as long as I’ve been using nix (since early 2015) nixpkgs has had definitions of the vast majority of commonly used software, and pre-built binaries of most of those. Definitely worth another look. Also note that you don’t have to switch distros as nix can be plugged into almost any Linux environment without conflicting with whatever else is there.
I used Arch Linux for a while (~3 years) before making the jump to NixOS.
Many things I got from AUR packages are binary-cached in stable Nixpkgs, not to mention "compilers" (GCC, clang, and various versions of GHC in my case). I don't know when you last tried it out, but binary availability isn't a problem now if you'd like to give NixOS a spin. The only thing I build from source is polybar[0].
The Zuk Z2 is good for devs. It costs around USD 260 from online Chinese retailers. Great specs and battery life, an easily unlockable bootloader, and lots of unofficial CyanogenMod builds available if you search GitHub.
Thanks for the essay; it was a great read. Maybe my understanding is wrong, but I am slightly perplexed by one idea. I really like the link you have drawn between the designer's and developer's workflows, and I think your essay touches on an ideal. But, from an applied perspective, this may be difficult to attain when creating components that need to transition visually between states within the original element.
Using your video player example, if the video is loading (state 1), and once loaded, it starts playing (state 2), wouldn't a pure functional approach imply the entire <video> DOM element is replaced by a new one in the change from state 1 to state 2? What if I only wanted to animate the loading bar away and fade out the thumbnail when leaving the loading state, while maintaining the original HTML element?
I'm curious to know if you've thought about this, and have any insight, because it's something I hope to understand. Thanks.
React (and others, I'd assume) does this by keeping a virtual DOM and then only applying the DOM manipulation necessary to get existing state to match desired state.
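A toy Python illustration of that diffing idea, comparing old and new attribute maps and emitting only the operations needed (real virtual-DOM diffing also handles children, keys, etc.):

```python
# Minimal sketch of virtual-DOM diffing: compare old and new
# attribute trees and emit only the operations required, so the real
# element (e.g. a <video> tag) is patched in place, not replaced.
def diff_attrs(old, new):
    ops = []
    for key in old.keys() - new.keys():
        ops.append(("remove", key))          # attribute went away
    for key, value in new.items():
        if old.get(key) != value:
            ops.append(("set", key, value))  # attribute added/changed
    return ops

loading = {"tag": "video", "class": "loading", "poster": "thumb.jpg"}
playing = {"tag": "video", "class": "playing"}
print(diff_attrs(loading, playing))
```

Because only the `class` change and the `poster` removal are applied, the underlying element survives the state transition, which is what makes CSS animations between states possible.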
Hey, sorry for the late reply. I haven't worked with React, but I am familiar with Ember, Backbone, Vue, and Riot. Nonetheless, could you please explain further?