I've argued this stance once before, but I feel it needs reiterating on this topic: I for one am 100% convinced that whatever the (FOSS) software future has in store for the world MUST have strong support for dynamic linking.
If you look at something like `apt-cache show podman | grep ^Built-Using:` (Output: https://paste.debian.net/plain/1225449) on Debian 11, you will see why. Imagine a few of those components shared between tens of packages, and security problems discovered in some of them. It's got to be any package maintainer's worst nightmare. "Traditional" distros run by volunteers and with the participation of small-ish companies will not be able to cope with that kind of rebuild churn - and yet they will live amongst us for many years, if not decades, to come.
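As a rough sketch of what that churn looks like in practice (assuming the dctrl-tools package and the usual apt package lists are present; the library name below is just an example), you can ask the reverse question, i.e. which packages embed a given Go library and would all need a rebuild after a fix in it:

```sh
# List every binary package whose Built-Using field mentions the (example)
# library, i.e. everything that has to be rebuilt when that library is fixed.
grep-dctrl -F Built-Using -s Package golang-github-containers-storage \
    /var/lib/apt/lists/*_Packages
```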
I've argued the opposite stance numerous times, because shared dependencies are far rarer than they might appear, while maintaining custom packaging for different distros is a burden I don't want to bear as a package author. I want to have one build for all distros, not custom packaging for every distro that has a slightly different definition of how software is supposed to be packaged.
In my experience, non-developer users don't care about package managers (they want them to be invisible) but do care about one-click install. The only way I've found to satisfy those users is to distribute applications outside the distros' package managers and statically link as much as possible.
It saves me and my users time and money. Package management with inverted dependency trees is user and developer hostile, and it's why shit like podman and docker have to exist in the first place.
Creating packages for various distros is not your job as a software developer, and it never was intended to be your job in the first place. At least part of this stance is based on a misunderstanding of how Linux package manager style software distribution works.
You most likely don't have experience creating and maintaining packages for every distro that exists; expecting you to be able to do that would be silly. Typically it's the users or developers of the countless different distros who handle packaging your software, not you!
As a developer you just need to provide the source code, use a standardized build system such as make, cmake, meson, cargo, etc., and list the libraries that your software depends on. If you do these things, creating a distro-specific package for your software will be totally trivial for whoever feels like doing it and contributing to their distro's repo. Most distros have decades' worth of tooling accumulated to make packaging software that uses standard build systems absolutely trivial!
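For a sense of how little work that can be, here is a minimal, hypothetical Arch-style PKGBUILD for a project that uses meson; all names and the URL are made up for illustration, and a real package would pin actual checksums and dependencies:

```sh
# Hypothetical PKGBUILD: the distro-side recipe stays tiny because upstream
# uses a standard build system and declares its dependencies.
pkgname=exampletool
pkgver=1.0.0
pkgrel=1
arch=('x86_64')
url="https://example.org/exampletool"
license=('MIT')
depends=('zlib')        # whatever libraries upstream says it needs
source=("$url/releases/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')     # a real package would pin the actual hash

build() {
  cd "$pkgname-$pkgver"
  meson setup build --prefix=/usr
  meson compile -C build
}

package() {
  cd "$pkgname-$pkgver"
  meson install -C build --destdir "$pkgdir"
}
```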
Optionally, you can provide a flatpak image (or similar) for users who want to circumvent the system package manager.
Yes, it is my job as a software developer to distribute software to my users. Having distributions pick up my software and distribute it themselves is a plus, but until my application has thousands of users that won't happen.
And in order to reach those "thousands of users", my users do need to be able to actually use my application (open source or not); this means either I or they have to do all that work...
So yes, one needs to "have experience in creating and maintaining packages for every distribution that exists"; or as I've proposed, just skip all this hassle and provide single binary executables for the platforms (not distributions) that one supports.
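For the record, the kind of thing I mean is roughly this (project name and paths are invented; exact flags depend on your toolchain):

```sh
# One static binary per platform, not per distro.
# Rust: link against musl so the result doesn't depend on the host's glibc.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl

# Go: disable cgo to get a fully static binary for the same effect.
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o exampletool ./cmd/exampletool
```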
I haven't tried submitting any programs I wrote to Linux distributions, so I don't know how easy it is to find people in Debian, Fedora, and possibly OpenSUSE to package an app with no users yet, or to write your own OpenSUSE or Ubuntu PPA or Arch AUR package to distribute your app. Nonetheless I hear horror stories like https://lwn.net/Articles/884301/ saying that new packages have been waiting for up to 11 months to be reviewed. (Right now, https://ftp-master.debian.org/new.html has 55 packages (down from 208), and 9 of the 55 have been waiting for over a month to be reviewed.)
You seem to have misunderstood what I was saying. I never said it's not your job to distribute your software to your users in general; I said it is not your job to distribute via distro-specific packages and repos. Even if you wanted to do this, you couldn't, since you don't have the necessary permissions to contribute code to most (or even any) of the repos in question.
I am not against providing a static binary or image (or whatever) by the way, this is probably the best thing that you can do!
I was just trying to clear up the misunderstanding about distro specific packages and who creates and maintains them since most people don't have much experience with the process.
It's also worth noting that this isn't really a one-or-the-other deal: you can provide a portable method and people can still package your software eventually. I think the two styles work very well with each other.
This is divorced from reality. You're right, I do not have the experience to package it on every distro. I do, however, get bug reports that "$app does not work on $distro" that I have to fix as the software developer of $app, because "just wait for another software developer who understands $distro to package it for you" is not an acceptable workaround for users.
Even something that seems innocuous like, "oh sorry it's not available for that by default, here's a tarball" is too much friction for user applications.
Flatpak is the closest to a solution we have, but it has its own issues. It's infinitely superior to "just use the standard build tool like $(N different build tools) and list your dependencies, then pretend it's someone else's problem to solve!"
Maybe we can also mention that semver is more of a rough guideline than anything else. You don't really know whether an X+Y combination that runs fine will continue to run with X+Y' unless you have tests that cover that workload and you've actually run them.
First, can this not be reduced to an automation and compute problem? A new version of a library with a bug fix or security fix is released, so rebuild a bunch of packages. Where's the issue? Statically linked binaries are assembled at link time from prebuilt objects and archives, so "rebuild a bunch of packages" can often be further reduced to "re-link a bunch of packages". Optimize bandwidth consumption by building locally and validating against a reproducible build transparency log.
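To make the "re-link" idea concrete, here is a toy sketch (all names and paths invented): only the fixed library is recompiled, and its dependents are re-linked from objects they already have:

```sh
# 1. Rebuild only the library that received the fix.
make -C libfoo                 # produces an updated libfoo/libfoo.a
# 2. Re-link each dependent application from its already-compiled objects.
for app in app1 app2 app3; do
  cc -o "$app/$app" "$app"/*.o libfoo/libfoo.a
done
# The applications' compile step never reruns; only the (cheap) link step does.
```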
Second, shouldn't the majority of the time rebuilding packages be dedicated to testing each application with the new library? Would you just skip that step if you used dynamic libraries?
If traditional distros don’t scale, then perhaps we really should be looking at alternative approaches? We need systems that serve the users ahead of the maintainers.
At the moment, given that the majority of open-source software is hosted on GitHub, perhaps GitHub releases (as in downloads from the releases tab) might be one such possible "alternative software repository". It's only missing an official "installer"; but there are a few unofficial alternatives like `wget` and `curl`. :)
----
I'm not endorsing GitHub, I'm quite neutral about it, however at the moment it does offer a "standardized" experience. So much so, that whenever I see a project announced on HN and the link takes me to a landing page, I immediately look for a GitHub link and switch to that.
Well said. Dynamically linked packages managed by a package manager and created and distributed by maintainers is the Linux Way of doing things for multiple very good reasons.
In fact, the article brilliantly describes one of them:
> Modern software distribution: AKA "actually free with no-ads apps store" of the Linux and BSD worlds.
That's it. That's the advantage summed up in a single description that wasn't even intended as a description of the advantages of the current way of doing things!
The article has a list of "cons", but it utterly fails to consider the possibility that this overwhelming advantage of what it calls "modern software distribution" is simply and inescapably tied to the maintainer-oriented approach that currently underlies it!
Attempts to create modes of distribution that are even slightly different from maintainer-distribution (think Google Play) suck ass. For all its faults, F-droid is a vastly better platform because it insists that the software must be built and distributed by an F-droid maintainer. Stepping even further away into the realm of developer built opaque binaries is begging for chaos and misery.
As you say, it's ultimately a security concern [1]. The article claims that a change is necessary because of "increased usage of languages and run-times that don't fit the current build model" and because "some of these new projects have large numbers ... of dependencies", but these are themselves problems with modern software development. Even putting aside distribution, bloated nodejs dependency trees create security vulnerabilities. The inability to develop software without pinning exact versions of your dependencies (which then need to be manually upgraded by the developer) creates security vulnerabilities and fragility. These are problems, not good reasons for changing our current way of distributing software!
I'm convinced that there are some (like me) willing to die on this particular hill. Come what may, even if half of the developers out there switch to Go and only ship static binaries, we're going to continue working on and using traditional Linux distributions with maintainer controlled software. (To be clear: Go and static builds are warranted in many cases. For example, closed-source rarely updated software like games, and software that is "deployed" rather than installed. But these are not the base case for Linux distributions.)
> Attempts to create modes of distribution that are even slightly different from maintainer-distribution (think Google Play) suck ass.
I'm not sure Google Play is a great comparison here. Package managers like Homebrew and Scoop are probably better ones; when I just want the latest version of a CLI tool, they make that experience much better on macOS and Windows than on Linux (I know Homebrew sorta supports Linux, but it's still early days).
As I see it, part of the drive behind tools like Scoop is to overcome the limitations of the binary-shipping strategy common to Windows developers. They are successful at this, I agree, but only partially successful. They come from the tradition of programs like Ninite, which were explicitly built as ways to make the binary approach suck less than it did before.
I see the success of these programs as essentially stemming from the insertion of user interests in the form of a maintainer-like process. Sure, they're still working with the binaries, but the actual process of installing and managing these binaries is controlled by users, for users: https://github.com/ScoopInstaller/Main/tree/master/bucket
This means that you get moderation and in many cases modification to the behavior of the program. In a freeware environment like Windows that's full of shitware, at the very least you can in many cases strip out the ads. That's absolutely not nothing, but at the end of the day it comes from a group of user-maintainers stepping up and saying to developers that no, you cannot simply do whatever you want on my system with your software. That's ... sort of the whole point of a software distribution, in the Linux world!
When I want the latest version of a CLI tool on Linux, I simply `pacman -S package`. That's it; one command. I don't see how it could be any simpler or better than that, and on top of that I'm getting the benefits of moderation and integration with the rest of my system. Perhaps you are emphasizing latest version here, and hinting that you don't get that on Linux distros? That depends entirely on the distro; a software distribution is (roughly) a collection of user interests. An Arch user wants (and gets) the latest versions of all upstream software. A Debian user does not want this or see constant updating to the latest version as an advantage, so that's not what they get.
> An Arch user wants (and gets) the latest versions of all upstream software.
My understanding was Arch versions often lag behind that of the latest binaries published on GitHub (based on only mucking around with Arch once); however when I checked the AUR just now everything I use was up to date. Cool.
> A Debian user does not want this or see constant updating to the latest version as an advantage, so that's not what they get.
I'm not sure I agree. You can value Debian's stability and also want to install the latest versions of some tools; that's where I'm at, and package managers that just download statically linked binaries work for me nicely.