Big-endian seems to be something of a Thomasson[1] these days. I wonder if there is any real use case apart from testing code to see if it still runs on old machines.
Mainframe OSes including mainframe Linux are Big Endian. Maybe mainframe Linux could switch but thus far it has not, it seems unlikely the other mainframe OSes would want to. AIX is also Big Endian and seems unlikely to switch and will be supported until at least 2040.
Personally, I find big endian a lot more natural when doing hardware development, though I understand there are people who think the opposite. As far as performance goes, the difference is negligible: network byte order is big-endian, PCIe is little-endian, so you are usually eating some conversion on either platform.
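The conversion being "eaten" here is just a byte swap. A minimal Python sketch (using the struct module) of what big- vs. little-endian layout means for the same 32-bit value:

```python
import struct
import sys

value = 0x11223344

# ">" packs big-endian (network byte order),
# "<" packs little-endian (PCIe, x86, most ARM configurations).
big = struct.pack(">I", value)
little = struct.pack("<I", value)

print(big.hex())      # 11223344 -- most significant byte first
print(little.hex())   # 44332211 -- least significant byte first
print(sys.byteorder)  # whichever order this host uses natively
```

Crossing between the two domains means reversing those four bytes, which is cheap but not free.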
I also really don't understand why people thought it was such a hard concept as to largely force Linux (PPC, arm, mips) toward little.
AIX is big endian but: "SUSE's Linux Enterprise Server (SLES) offers SLES 12 on Power in little endian mode only. As such, customers will need to migrate from big to little endian as they upgrade from SLES 11 to SLES 12."
Oh yeah. Given that MIPS supports both, I'd have thought most would be configured for little endian, but there are still currently supported big-endian MIPS targets on openwrt.org
NetBSD's cross build system (http://www.netbsd.org/docs/guide/en/chap-build.html) is such a joy to use for odd systems like this. Lately I have been playing around with a Ubiquiti EdgeRouter-4 (Octeon MIPS64) which is also big endian. Great little platform for home routing and serving under BSD.
Incidentally, this article on Debian tooling reminded me of a CICD experiment I did a couple of years ago.
I worked on a Debian-based product which had a three stage Jenkins -> OBS (Debian packaging) -> OBS (images) CI pipeline that required modernization for various reasons. One option I submitted was to create a Jenkins plugin that exposed a Debian source repository as a multibranch pipeline, as a replacement for the second stage.
The idea was to encode both a Debian source package's version and its binary build dependencies into the SCM revision seen by Jenkins. If either changed, Jenkins would trigger a build automatically, and if the resulting binary package was pushed back into the source repository, the tree of dependent packages would be iteratively rebuilt.
I made both a plugin prototype and a Jenkins test instance, using Aptly as a repository manager and a dead-simple Jenkinsfile as build instructions. It was quite an elegant setup that allowed one orchestrator to oversee all the stages of building that product, but it wasn't selected at the end due to too much uncertainty when compared to off-the-shelf solutions. Maybe I should write a blog article about it...
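A rough sketch of that revision-encoding idea (all names here are hypothetical illustrations, not taken from the actual plugin): hash the source package version together with the resolved versions of its binary build dependencies, so that a change to either yields a new "revision" for the multibranch pipeline to notice.

```python
import hashlib

def synthetic_revision(source_version: str, build_deps: dict[str, str]) -> str:
    """Derive a stable revision id from a source package version and the
    resolved versions of its binary build dependencies.

    If either the source version or any dependency version changes, the
    digest changes, which a multibranch-pipeline poller could treat as a
    new SCM revision and trigger a build.
    """
    h = hashlib.sha256()
    h.update(source_version.encode())
    for name, version in sorted(build_deps.items()):  # order-independent
        h.update(f"{name}={version}".encode())
    return h.hexdigest()

rev_a = synthetic_revision("1.2-3", {"libfoo-dev": "2.0-1", "gcc": "12.2"})
rev_b = synthetic_revision("1.2-3", {"libfoo-dev": "2.0-2", "gcc": "12.2"})
print(rev_a != rev_b)  # a rebuilt dependency produces a new revision
```

Pushing a rebuilt binary package back into the repository then changes the synthetic revision of every package that build-depends on it, which is what drives the iterative rebuild of the dependency tree.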
> He recommends testing for non-server use cases, desktops for example. "I believe it is as stable as any other rolling release" due to all of the checks and procedures that the distribution has put in place.
AFAIK testing is worse than both unstable and stable regarding timely security fixes. In unstable you get a timely fix from upstream, in stable you get a fix by the security team.
> In unstable you get a timely fix from upstream, in stable you get a fix by the security team.
For anything that is serious enough, you will get a security fix straight away for both unstable and testing (through the testing-security repository).
For things that are not really that important, yes, you will get the fix later.
I just found out about CheckInstall[0] the other day. Very handy for adding packages that may not be in the official repositories yet. And it hooks into apt. I would call this tool pretty essential to being a happy user of Debian and some part of the "secret sauce", though not mentioned in this article.
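For the unfamiliar, checkinstall stands in for the `make install` step of a source build. A typical session (package name and version here are illustrative) looks like:

```shell
# Build from source as usual...
./configure && make

# ...then run the install step under checkinstall instead of
# "sudo make install". It watches the files being installed and
# wraps them into a .deb that dpkg/apt knows about, so the package
# can later be removed or upgraded cleanly.
sudo checkinstall --pkgname=myapp --pkgversion=1.0
```

The resulting package shows up in `dpkg -l` like any other, which is what keeps locally built software from scattering untracked files around the system.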
On the surface it does seem like a bad idea. However in practice I've found the changes Debian apply often seem to be very well thought through. I like to explore different Linux distros but I always keep coming back to Debian because how they do things just makes sense.
Debian Vim has a sane default which means my Debian .vimrc is only two lines, one to import the Debian defaults and another to set my four-tab settings. Debian Apache has the very useful a2{dis,en}{mod,site} utilities and a standard way of configuring sites in /etc/apache2/sites-available which makes scripting Apache configs very easy to do. GNOME is set up out of the box with sane defaults which means everything just works.
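That two-line .vimrc might look something like this (the first line is Debian's documented way of loading its shipped defaults; the exact tab settings are my guess at what "four-tab settings" means):

```vim
" Pull in Debian's sane defaults (shipped in the vim runtime as debian.vim)
runtime! debian.vim
" Four-wide tabs
set tabstop=4 shiftwidth=4 expandtab
```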
This even extends to how different packages work together. For example Debian Smokeping simply adds an Apache site config which I can then a2enconf and I straight away have Smokeping working with minimal fuss.
Going from Debian to Arch et al is an exercise in figuring out why things don't work as they should, only to find that the upstream code actually has poor defaults and that it was Debian all along applying its own judgement and coming up with better ones.
Debian also has unattended-upgrades which is a fantastic tool. And that blends in well with their very conservative packaging approach -- I can enable unattended upgrades with the confidence that Debian won't push through a breaking change. I would never use a similar function in other distros -- especially rolling distros.
No hate to Arch, Fedora, et al. I loved learning the ins and outs of Linux playing around with Arch -- I think Arch is a fantastic distro for learning the nuts and bolts of Linux. Fedora was an interesting experience, and I decided after a while it wasn't for me. But for my daily driver, I was very, very happy to migrate back to Debian stable. Debian stable is literally a distro that just works.
> Debian also has unattended-upgrades which is a fantastic tool
I very much dislike this tool. It gives itself the privilege to potentially restart any part of your system. I've seen it restart machines on GCP because it installed a new kernel. It might be fine for home desktop use, but turn it off elsewhere.
You can edit /etc/apt/apt.conf.d/50unattended-upgrades to make it behave any way you like. I believe it is explicitly suggested when you install unattended-upgrades that you should edit that file to suit your environment.
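For example, the option that prevents the surprise reboots described above ships (commented out by default) in the stock 50unattended-upgrades:

```
// /etc/apt/apt.conf.d/50unattended-upgrades
// Never reboot automatically, even when an update (e.g. a new kernel)
// requests it:
Unattended-Upgrade::Automatic-Reboot "false";
```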
This is the distro that allowed one of its package maintainers to silence a Valgrind warning by commenting out code they didn't understand, leaving the process ID as the only remaining entropy source and reducing SSH key generation to roughly 32,768 possible keys per key type and architecture.
It doesn't invalidate anything and I'll continue to use Debian forever. But let's not pretend there is any "secret sauce". It's all just the same wild west of FOSS like any other distro or big project.
For some reason this all reminded me of a line from the Battlestar Galactica remake:
> "The story of Galactica isn't that people make bad decisions under pressure, it's that those mistakes are the exception."
The story of Debian isn't the mistakes, it's how few mistakes there are. It's an entire operating system that's been released consistently for over 30 years and is currently available in 78 languages. There are around 122k packages in the official software repository, almost all packaged by the Debian project. Counting just which packages are installed or not, that's 2^122,000 possible combinations -- a number with roughly 36,700 decimal digits, dwarfing both a googol (10^100) and the ~10^80 atoms in the observable universe. Never mind the different architectures and all the different hardware variants they could be installed on.
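The combinatorics are easy to sanity-check (a quick back-of-the-envelope calculation, not from the original comment):

```python
# Each of ~122,000 packages is either installed or not, giving
# 2**122_000 possible package selections.
combinations = 2 ** 122_000

digits = len(str(combinations))
print(digits)  # number of decimal digits in 2**122000 (~36,700)

# Far beyond a googol (10**100) and the estimated ~10**80 atoms
# in the observable universe.
print(combinations > 10 ** 100)
```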
Even if it wasn't given away _for free_, it would be an impressive achievement.
> And regardless of how you feel about the default the way they went about it broke things for users
Only for users running Debian unstable, and even there it was apparently fixed/improved after a week.
Debian stable users had no "wild ride"; going forward they simply get the choice between the full-featured keepassxc package and a minimal variant without non-essential network and IPC features.
Currently, the most accessible big-endian platform that I know of is the Raspberry Pi running NetBSD.
https://wiki.netbsd.org/ports/evbarm/