“We produce extremely expensive hardware and bill millions for consulting and support, but we don’t want to spend a single dollar to update firmware, so now, dear OSS developer, it’s your responsibility to keep my solutions functional for half of your career”
For certain kinds of systems, a firmware upgrade is a risk with no mitigation. No amount of money will solve this problem: no sum will convince a sensible patient to upgrade the firmware on their pacemaker or anything like that.
But even for less critical systems -- what's the problem with wanting to pay less? That is one of the primary economic drivers...
You also, for some reason, think that software upgrades are some sort of natural phenomenon that others have to adjust to, that they just happen on a predictable interval, and that if you miss your cycle you have to pay. Which is obviously ridiculous. It's the result of the industry conditioning you to expect this to work a certain way, w/o questioning the reason for it to work this way.
What should drive software upgrades is, in large part, the longevity of hardware. The author claims that the longevity of hardware has improved, even though the industry didn't particularly invest in it. It's upsetting to have to generate a lot of e-waste just because we (as an industry) set our sights on a particular release schedule designed to maximize profits for those who provide releases and minimize them for those who consume them.
It’s not about pacemakers at all here, for the same exact reason you described: if you’re not able to update the firmware, you’re not able to update the underlying OS either.
The problem here is when you sell LTS solutions for $10M thinking you can pay $1000 for keeping your solution afloat for two decades.
It doesn’t work this way. Operating systems, as well as frameworks and runtimes, are constantly changing, because the industry is constantly moving forward. The only way to keep up is to constantly (and regularly) update YOUR software too (and plan the budgets accordingly). It’s your responsibility to fulfil your obligation.
After all, you can still run an up-to-date Linux (or better, NetBSD) on very old hardware; the problem is that you didn’t update your software regularly to keep up with the changing API/ABI, which means you didn’t invest much in the longevity of your product.
> Operating systems, as well as frameworks and runtimes, are constantly changing
No. They aren't. It's not a natural phenomenon out of our control. We decide when to change them. Presently, we make bad decisions. We should learn to make better decisions.
And you are incorrect when you think that pacemakers and similar equipment aren't the problem -- they totally are. Imagine that after ten years the hospital that installed a pacemaker needs to do a checkup or some other maintenance work on it with external equipment. But they were forced to upgrade that external equipment, because there was nothing with an LTS long enough to let them keep using a certified and vetted copy. And now they have no way to connect to the older equipment they distributed to patients, equipment they have no means of upgrading, but also no means of dealing with, because they had to upgrade their own system.
Backporting packages is a time-consuming process that actually requires skill. It is insane to expect people to do that for free on a volunteer basis because it makes your situation easier, especially as backporting gets harder the longer you go. RHEL offers 10 years, it costs money, that's your option. Or you pay for the staff to monitor and backport to your particular distro and foot the bill.
It's also maddening to present the issue as "our time is expensive and your time is free". If your field is unable to commit any work at all to updates, fine. UBI Micro is as small as it's gonna get and moves everything difficult to the host. That could conceivably run for 15 years if you are careful with how you build it.
10 years of support is standard on Windows, and it might be longer depending on the situation (see: Windows XP) or if you're enterprise and pay up (see: Windows 7).
And then there's what you allude to: a stable Windows environment that will, in most cases, run ancient code mostly fine, assuming it was written to the specs of the time or it's a very important piece of code (e.g. SimCity).
Assuming you are talking about the OS kernel API, what if you statically link your libc? My understanding is that the Linux kernel API famously does not change [0].
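For illustration, a minimal sketch of that idea (assuming a toolchain where `gcc -static` works; the program below is just a hypothetical example, not anything from the article):

    /* Sketch: a program whose only runtime dependency is the kernel's
     * syscall ABI, which Linux promises not to break.
     * Build it statically so no shared libraries are needed at runtime:
     *   gcc -static -o hello hello.c
     * The same binary should keep running on much newer kernels. */
    #include <unistd.h>
    #include <string.h>

    int main(void) {
        const char msg[] = "same binary, many kernel versions later\n";
        /* write(2) ends up as a raw write syscall into the kernel */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }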
Windows has a stable userland API/ABI; Microsoft goes above and beyond for backwards compatibility.
Linux does not have a stable userland ABI/API; it's very common for binaries built for one distro version to not work in another distro version due to changes in system libraries (curl, ssh, libc are common ones). There are usually ways to work around these, but it's a mess that shouldn't be left to the user.
The solution the Linux community has come up with to solve this problem is containers, such as Flatpak (for desktop apps) and Docker (server apps).
I don't think that is correct. Do you have an example? Linus is adamant that "[they] do not break userspace".
> it's very common for binaries built for one distro version to not work in another distro version due to changes in system libraries (curl, ssh, libc are common ones). There are usually ways to work around these, but it's a mess that shouldn't be left to the user.
But that would be the same thing on Windows. If you dynamically link your executable to an old version of libcurl and run your executable on a Windows machine with a newer version containing a breaking change, you'll be in trouble, whatever your OS.
> The solution the Linux community has come up with to solve this problem is containers, such as Flatpak (for desktop apps) and Docker (server apps).
Flatpak and Docker solve many problems other than the kernel API. Actually, they do not solve this problem at all if you think about it, because these technologies do not insulate you from the kernel (contrary to a VM).
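One quick way to convince yourself of that (a sketch; it assumes you have some container runtime handy to run the same binary inside):

    /* Sketch: print the kernel release. Run the same binary on the host
     * and inside a container; both report the host's kernel version,
     * because containers share the host kernel rather than insulating
     * you from it. A VM, by contrast, would report its own guest kernel. */
    #include <sys/utsname.h>
    #include <stdio.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }
        printf("kernel: %s %s\n", u.sysname, u.release);
        return 0;
    }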
The statement "Linux does not have a stable userland ABI/API" is about the kernel. If you are talking about glibc you are already in the Linux OS/GNU Linux (whatever it is called) domain.
I think I correctly interpreted Linus when he talked about the kernel API/ABI facing the user (the application developer).
A lot of the trouble you are going to get maintaining builds is due to poor tooling, not Linux.
But we are getting into semantics here. At the end of the day, I think that if you are careful with what you are doing and think a little before choosing your tools and writing code, you should get very good mileage out of your Linux binaries.
I don't know enough about Windows to comment on its stability, but I very much doubt I could just slide my original Quake CD or 3ds Max 4 install into Windows 10. But I might be wrong!
> But that would be the same thing on Windows. If you dynamically link your executable to an old version of libcurl and run your executable on a Windows machine with a newer version containing a breaking change, you'll be in trouble, whatever your OS.
Most of the time I can run a program from the 1990s on Windows 11 just fine; you can't say the same for Linux.
The only incompatibility in Windows that happens frequently is if you try to run a program written for a newer Windows version on an older version. That will likely fail because forwards compatibility isn't a priority. But with regards to backwards compatibility, Microsoft goes to herculean efforts to make sure Windows programs written several decades ago will work today.
If you statically link your dependencies or ship the dynamic libraries yourself, like you would need to do on Windows, you would get the exact same experience on Linux.
The kernel shouldn't break user space, but glibc, Qt, GTK, etc.: any upgrade of the most common dependencies will break programs without at least recompiling them.
Windows programs ship with all their necessary libraries and/or link against the Microsoft C++ redistributable packages, and you can have many different versions installed.
Your MSI installer has to install the libcurl.3.dll instead of it being shared with other programs.
Docker and Flatpak fix the userland part; the kernel part is already stable by "definition". Stuff may still break over time, but no more than it does on macOS and Windows. It's not broken by design anymore.
The Linux kernel API itself isn't really enough to do e.g. any UI software, because things like creating windows or showing notifications are not part of it directly. You need additional software on top of that for those features.
The Win32 API is much larger than the Linux kernel API and is stable, so there is quite a large chance that a couple-of-decades-old piece of software still works without recompiling or updating libraries. It has its issues, with many parts of it being completely horrifying to use, but the stability is something that Microsoft has done well.
Linus does really well with the kernel ABI and API, but the problem is really with literally everything else. It's usually impossible to get something that's more than a few years old to even compile, let alone run an old binary.
All this talk about what is or isn't stable ABI on Linux is irrelevant to the article anyway. The article's author explicitly doesn't want their 15-year LTS to ever update its OS or libraries. So ABI stability is not a concern.
That's because you are lucky, not because it's designed to be that way.
Windows has a very variable support timeline for its releases, and, going forward, Microsoft is only looking to cut down on support times, not increase them. So, even if it worked for you during the NT era, it's not going to work for people writing for Windows 10 or whatever the present-day version is.
Not to mention that Windows is just not a player in many major computer markets, so it's not even an option for someone who, e.g., wants to provide storage solutions, HPC, and many others.
Perhaps not for medical, but you can coax Windows into behaving mostly "realtime" (aka a 1 ms polling interval) by invoking some Win32 API around the multimedia timer interface. This still works as of Win11, but it has to be foreground and not running on battery, plus a few other reasonable constraints.
There are really two things you need to avoid: context switching, and tripping over the internals of the kernel. By forcing the multimedia timer to 1 kHz, you can address the latter. You should still try to run your time-sensitive code on a dedicated thread and never yield to the OS, even if yielding is made less costly. Keeping the cache hot all the way through is way more important. If the kernel doesn't have an opportunity to make a decision about your priority, you don't have to worry about a thing. Hold onto the thread of execution and never let go. Async/await is your enemy. Poll everything.
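Roughly what that looks like in practice (a sketch, not production code; it assumes a plain Win32 C build linked against winmm.lib):

    /* Sketch: request 1 ms timer granularity via the multimedia timer API,
     * then keep the time-critical loop on one thread that polls the
     * high-resolution counter instead of yielding to the scheduler.
     * Link with winmm.lib. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        if (timeBeginPeriod(1) != TIMERR_NOERROR) {   /* ask for ~1 ms resolution */
            fprintf(stderr, "could not set 1 ms timer resolution\n");
            return 1;
        }

        LARGE_INTEGER freq, now, next;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&next);

        for (int tick = 0; tick < 1000; ++tick) {      /* ~1 second of 1 kHz ticks */
            next.QuadPart += freq.QuadPart / 1000;      /* next 1 ms deadline */
            do {                                        /* poll; never yield */
                QueryPerformanceCounter(&now);
            } while (now.QuadPart < next.QuadPart);
            /* ... time-sensitive work goes here ... */
        }

        timeEndPeriod(1);   /* restore the previous timer resolution */
        return 0;
    }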
For me, the biggest win is in the development experience. Building this stuff on Windows is like cheating compared to being forced to target an obscure RTOS Linux distribution up-front. Even then, if I build my RT product in something like .NET 6+, it is plausible I could compile the exact same code for a more power-efficient platform after completing initial development on Windows. The only code that would be platform-specific are any "hacks" like one-time mm_timer calls.
The fact they run in hospitals doesn't mean it's a good thing... Hospitals were duped into using MS products. Today, this is a major plague in the health systems of many countries, and few people realize its extent.
In short, the problem MS created for medical / hospital equipment is that, by trying to ensure they control the software, they made the hospital staff incapable of using the software to their own benefit. Hospital research today is archaic in its practices compared to virtually any other field because of how it relies on Microsoft products. Usually, it's a bunch of people manually filling Excel spreadsheets and doing by hand a lot of other things that would've been trivial to automate.
Hospital / medical equipment is not validated well by the various bodies established to do that (e.g. the FDA or similar European bodies), because the technology is proprietary and validation essentially relies on companies submitting their internal research results to get the equipment approved.
PACS and similar systems are designed and implemented by companies who don't understand how hospitals work, and aren't interested in learning -- the typical enterprise model inspired and encouraged by the use of MS products, where deals are made between high-ranking managers w/o any attention to the actual needs of the personnel who are supposed to use the software. And this is, again, because MS conditioned hospitals not to use open-source software, which also resulted in no internal talent / expertise growth. So doctors in external clinics, and even inside the hospital, even though connected by their computerized system, don't know how to use it well, or sometimes the systems lack important functionality.
By bribing their way into the medical system, MS committed the largest crime in its history, much larger than anything it's been taken to court for. Countless lives have been lost or severely impacted because of MS profiteering. But nobody really talks about it because these numbers are hard to count. It would be very hard to sue them for this, as it is too broad and hard to get concrete evidence of... but if you had ever interacted with hospital computer systems from the inside, you'd see this plain as day.
This is probably about machines that handle special hardware.. like MRI, ECG, or monitors for vitals like pulse, blood pressure, blood oxygenation or other stuff like that..
Those machines need to be certified as a whole package and this include the computer that will control it.. and once you certify you cannot make changes to the package without having to certify again..
So you certify the machine with that software on it.. that specific kernel version, that specific libc version and so on.. if you change anything you need to get a new certification..
and now those machines are no longer air gapped because hospitals want to be able to remote monitor the patients vitals from a central nurse station..
> and once you certify you cannot make changes to the package without having to certify again..
Oh you are so naive... You probably wish it worked like that, but in practice it doesn't.
The certification process is completely broken today. First of all, it allows proprietary software / hardware to go through this process (which is the majority of applicants by far). Typically, the FDA or a similar body will ask for the company's own research that establishes that the software works, with absolutely no way of verifying that it does. They also have no way of ensuring that whatever version was used to produce the research results is the one being installed on the hardware shipped to the hospital / patient. Companies producing software routinely patch their software if they discover problems after initial release and don't hesitate to claim that it was approved.
I got this far, wondering if this was satire or not, then realized they just have no concept of the world at large:
> America, some of the EU, England, and some others are considered “First World” countries
Some others? Canada? England isn’t a country. The UK is the state; yes, there’s weird terminology, but even as a Brit, I don’t class England as a country. I love the implication that Wales, Scotland and NI are not first world.
Some of the EU?
There’s so much missing here it’s crazy.
Yes, the article has many many flaws, that was my personal limit.
Besides, when I’m halfway through and still can’t tell if it’s a joke or not, I think that’s a red flag.
I need to read this article more carefully at another time, but it comes across as rather confused to me.
> Some time near the end of the 1990s and beginning of the 2000s is when desktop/laptop computer quality began to improve.
What does this mean? Computer quality has continuously improved since their inception. At different times on different axes, and sometimes some things got worse, but on the whole they've always improved.
> Not one of those libraries has Internet.
I can't figure out which libraries are referred to here. Tons of libraries have internet.
The article also seems to be confusing hardware and software a lot. Lots of hardware is thrown away too easily because modern software often tends to assume it's run on modern hardware where it can waste resources. Software should be more frugal in its use of resources and if you do, there's no technical reason you can't have modern, secure software running on 15 year old hardware.
> Computer quality has continuously improved since their inception.
Not really. For example, the switch to LCD monitors from CRT was a step back in quality. LCD was just way cheaper and more convenient in that it took less space, so regardless of the worse quality it still took over. It took many years for the various aspects of image / video quality to catch up.
There are plenty of similar examples where price decrease was what was driving the industry rather than quality. In general, shrinking form-factor brought lots of drops in quality, because it's harder to make various electronic parts smaller and as durable. Opening markets to a wider range of players (e.g. "the PC revolution") is also bound to bring drops in quality, as many inexperienced players will be seeking to enter the market and typically contest the lower bound of quality.
It's only when the industry stabilizes on some standard and creates a group of experienced players who compete for the same niche that it becomes possible for quality to continuously increase. Did something like this happen in the 90s and 00s? -- I don't know. From a software perspective, I think of those years as the time when the decline started to accelerate, and I see us still on the decline of software quality in general. But I don't know what the author is referring to.
> LCD was just way cheaper and more convenient in that it took less space, so regardless of the worse quality it still took over.
They were definitely not cheaper at first. But taking up less space is also a quality. You often get trade-offs like that. You often get things that are faster, but they produce more heat. Early SSDs were faster, but had less capacity than HDDs. It's still part of progress, and these technologies often exist side by side for a while; for a lot of people, the trade-off means they prefer the older tech until the newer tech catches up on the attributes it was less good at.
> In general, shrinking form-factor brought lots of drops in quality, because it's harder to make various electronic parts smaller and as durable.
Again, different aspects of quality. The fact that computers now fit in your pocket instead of taking up an entire room is progress. It enables new ways to use them. Early laptops were not an attractive replacement for many desktop uses, but now they generally are. But still not quite for all (I just bought a new desktop PC).
Let's be adults here. When we are talking about the quality of a monitor, we are talking about colors, refresh rate, precision, etc. Form factor is one of the least important metrics.
Don't even try to pretend they are somehow comparable in terms of quality. It's making an argument for the sake of making an argument. It's like claiming that fast-food may be better in quality than home-cooked food because it's faster to "cook".
> Not really. For example, the switch to LCD monitors from CRT was a step back in quality. LCD was just way cheaper and more convenient in that it took less space, so regardless of the worse quality it still took over.
That pretty much depends on the quality metric. I switched (in the late 90s) from a big 17" CRT to a 14" (TN) LCD just because it had an incomparably sharper image and no issues with refresh flickering.
Yes, it had crap viewing angles and contrast (and probably also colors), but that did not really matter as much as a sharp image.
> just because it had an incomparably sharper image and no issues with refresh flickering.
You probably had an extremely bad CRT, but got a very good LCD monitor. Typically, the situation would've been the exact opposite of yours. Early LCDs were incapable of supporting the high refresh rates of high-end CRTs (that's why for a long time people in e-sports were using CRTs). Also, the physical "pixels" of the best LCD monitors were still bigger than those of CRTs. This is what eventually resulted in the adoption of OTF fonts with a lot of trickery in sub-pixel interpolation... which sucked a lot when it just started, and still sucks, but not as much.
Everyone commenting here that the author wants 15 years of support for free hasn't read the article.
>Those of us creating medical devices, software for factory lines, and systems that matter need a minimum of 15 years for an LTS especially if you have it in your fool head we should pay money for it.
Emphasis mine.
Author is saying, rightfully, that "long term" support needs to be more than 5 years. Author is also saying, reasonably, that it should be at least 15 years if the developers in question want to be paid for support.
Author is saying he will pay for long term support if that's what it takes, but not if it's any shorter than 15 years.
They're implying they wanted to pay for 15 years support but found no takers. Is the problem that nobody wanted to support them for 15 years (their insinuation) or that they didn't pay enough? Given they keep talking about 5-year LTSes without bringing up any of the 10-year LTSes that do exist (Freexian, RHEL, etc), it seems clear to me that it's the latter.
Also, their requirement is nonsensical. They want support, i.e. bugfixes, which will require them to install updates. But at the same time they don't want a model that will update them to the latest supported version. Why do they care what the content of the update is? The point of the support contract is that they will be supported regardless.
Also also, speaking as someone who works at a company supplying software to car manufacturers, we don't have 15-year LTS for our software either and the manufacturers don't have a problem with it. Their stacks are being designed around automatic updates, which again they had to be anyway to get bugfixes in general. Support contracts are support contracts. Whether an update changes v1.0.0 to v1.1.0 or v2.0.0 makes no difference to them. So what the author says about car manufacturers is also nonsense.
I can run 30-year-old programs on my x86_64 machine just fine. What the heck is the author talking about? x86 is one of the beacons of long term support. I upgrade computers every 10 years or so. And never had issues. Ever.
The tone and quality of this article is atrocious. Let alone demanding that software developers (the very ones the author spends a chunk of the article belittling) work for free for 15 years. There is more nuance than upgrading everything 6 months/yearly versus being fine with still using Windows XP 20 years later.
As an aside, it feels like not only blogs but also news articles from places like the BBC and NPR have several glaring spelling and grammatical errors. Either the quality of writing has gone down significantly in the past few years, or my standards for writing have gotten higher.
Pre-internet, you would do the same thing with your computer every day for 15 years and then buy a new one; you could load the same version of MS Word 15 years later, no problem.
But when you're dealing with the rest of the world, a) you need to keep up, people send you Word n+15 compatible files, b) you want to access new websites, and c) you don't want to get hacked.
If you want to run your offline computer for 15 years, that still works today
ITT: surprisingly many people think that software grows on trees and is harvested twice a year to get new versions of it. And that if you miss your harvesting cycle you might as well close your software farm and just keel over and die.
The way I see it, the primary motivation for software longevity should be the hardware longevity. Ideally, software needs to outlive the hardware rather than the other way around. But we came to expect ridiculously short version iterations because this process was a major profit driver for software companies.
If modern hardware happens to be more reliable than what was the norm 20-30 years ago, then, naturally, we should want the software to adjust to service the modern hardware. And if this doesn't happen -- it's a problem. In this instance, it seems to be an indication of companies big and small trying to sell worse quality software than what was reasonably possible while duping the audience into believing that the quality is still good.
Well, of course it is upsetting! Why would you even think otherwise?
In this day and age it should be possible to not tightly integrate the computer into the device you are building. That way you can update the hardware and stay current with software. Building things with computers is just an ongoing investment into updates and staying current. Because the longer you push it away the harder it gets to update.
Red Hat will sell you support contracts for up to 10 years, more if you pay them enough and promise not to talk about it. That Ubuntu isn't as business-focused as Red Hat is... Well, there's a reason Red Hat is a billion-dollar company and Ubuntu isn't. Not that there's anything wrong with that.
The article is full of unjustified hatred for modern software practices. The author fails to acknowledge the number of failures of big projects and the escalating costs, focusing only on the fact that today's practices output less robust software.
Hatred for modern software practices is fully justified.
Unfortunately we don't have enough historical evidence yet, nor have we had enough shifts in software development practices, to tell whether we had it much worse or much better before, but there are obvious shortcomings in what we do today compared to 20-30 years ago.
The major shortcomings, the way I see them:
1. The overall software quality has been in steady decline since the late '70s. This is, at least in part, due to more software being developed for less mission-critical systems and consequently by less qualified personnel. So, commercially, we don't need as much higher-quality software as we did before.
2. The software industry is exceptional in how much it relies on multiple other players in the industry for individual production, and as such it is a lot more susceptible to trends and more influenced by the majority than many other industries. I.e. if the majority of the industry decides they don't need long-term support, then the minority won't have it, because the minority's survival is contingent on the policy and services (inadvertently) provided by the majority.
3. Software quality is exceptionally difficult to measure formally. This leads to big and small players, intentionally and unknowingly, mis-advertising and misunderstanding the quality of their own products. This makes it very hard, almost impossible, to price your products or services based on quality. The realistic alternative to quality that companies use is the ability of the product / service provider to support the customer in the case of failure. Which is a function of the provider's size more than anything else. This leads to a situation where, instead of focusing on understanding and improving software quality, companies develop reputation through marketing and customer support, which they later misrepresent as software quality.
All of the above is the result, at least in large part, of modern software practices. We created practices that emphasize speed of delivery over every other aspect of software because we don't know, and don't want to figure out, how to reliably increase any other metric.
We created a lot of monopolies in the software world by following the modern development practices, which preach using commonly used third-party components, even of lower quality, over bespoke alternatives. This is especially motivated by cost savings when it comes to human resources, which become more interchangeable (and cheaper in the short term for the employer) with this approach.
Software monopolies stifle innovation and fix prices not only in the software field, but they also adversely affect the hardware field which has to adjust itself to the software practices.
We could be doing better if we understood the shortcomings of our current process -- these aren't the necessary consequences of making software. But in order to effect a change, initial good will is needed from multiple players, which is hard to accomplish w/o these multiple players, at least, first acknowledging the problems.
Since it’s not actually mentioned in the article and I found myself quite confused, “LTS“ means “long-term support“. A pet peeve of mine is when bloggers write articles without defining in a single simple parenthetical what the acronyms are.
The article is full of shit, the author wants free LTS support for 15 years so he can more easily make money...
The quote is:
Those of us creating medical devices, software for factory lines, and systems that matter need a minimum of 15 years for an LTS especially if you have it in your fool head we should pay money for it. Over that course of time you cannot drop our operating system just because you want to.
If his company wants 15 years of LTS support for their products, they should find some way of paying for it....
Absolutely this - expecting free support for no reason is a ridiculous notion to build your own business on.
Even then, I think 15 years of support for a Linux distribution is a bad idea anyway - software just moves way faster than that. How supported is that release really going to be after 14 years, even with the best of effort?
Is every upstream package going to be version compatible with a 15 year old release with no breaking changes? And if it's not, and you patch around that or freeze those packages, is your software still "supported"? What if there are unpatched critical issues? Is that a better idea than keeping up-to-date with a rolling 24/36/48 month release cycle?
In the analogy of a roof or a basic appliance that's fine, but the long-term support equivalent of that is to have a copy of the software on a couple of USB drives and disconnect the machine from the internet completely. As long as you don't do anything novel with it, it'll probably work for decades. There are still Windows 3.1 terminals in the wild somewhere.
> Even then, I think 15 years of support for a Linux distribution is a bad idea anyway - software just moves way faster than that.
Have you asked yourself why this is the case and why it should be the default accepted case?
> Is that a better idea than keeping up-to-date with a rolling 24/36/48 month release cycle?
There are plenty of devices out there that exist for longer than 2 or 4 years. And those "rolling upgrades", how are you going to deal with "we're dropping 32-bit support"? Or "we're dropping support for CPUs that don't support certain features". Or a hundred other deprecations and the inevitable increase in bloat of software?
If it’s a device/setup that is self-contained, and not exposed to e.g. the internet, then… it’s already a 32-year LTS! Just use it.
Ah, but occasionally you do need something new. Well, in most cases, the LTS support won’t help you - because they just fix bugs, but do not e.g. provide TLS 1.3 to replace the aging OpenSSL 0.9.6, which barely supports SSLv2.
Additionally, do you have an LTS hardware replacement for 15-year-old hardware running Linux? IIRC Dell, HP and friends only provide a 5-year guarantee - and that guarantee is “we’ll give you a new machine if we don’t have replacement parts”, which you also have with Linux (just install the latest).
There is a very good reason. It is hard to split bugfixes from features. Sometimes it is even very hard to fix a bug without changing the behaviour. It is unreasonable to expect everyone to backport their fixes to all old versions.
There is a very good reason: the maintainers want to make the change. They don't even need a good technical need to do it, but usually there is at least something that drives the change.
It’s something the maintainer made and licensed in such a way that you can use it for free. The special status concerning “his” code is that it generally is in fact “his” (or her) property.
If you don’t like it you’re free in most cases to fork it and suit yourself.
Either way, it still doesn’t change the fact that there are going to be upstream bugs and vulnerabilities that remain unpatched if you’re trying to keep something frozen for 15 years.
An open source maintainer is a person volunteering their time and resources to offer something, usually free of charge. As such, they decide what use cases they want to support and for how long. You can't expect someone to do work for free to support you.
Either pay someone to maintain it for as long as you need, or engineer around whatever issues you encounter; it's a you problem, and possibly the development process or software stack you are using is not flexible or robust enough to support your own use case. Unless you specifically pay for however many years of support you need, you don't get to decide the terms of whatever support is offered for free.
In the software industry we rely on the community to carry a large share of our "expenses". A company that manufactures medical equipment, let's say a pulseox, would normally need, well... maybe five engineers working on the project. If, suddenly, they also need to maintain the Linux distribution used for their device, they'd need like fifty. This would make the pulseox prohibitively expensive and the company would go out of business. The only companies able to manufacture a pulseox would be the companies already doing Linux maintainer work internally. That would eventually lead to a monopoly, and the prices would still shoot through the roof.
But it doesn't have to be this way. If we collectively understand that there's a problem with the absence of real long-term-support releases, then many smaller companies can survive and produce cheap equipment.
Finally, there's no "natural" rate at which software needs to upgrade. There's no reason it shouldn't upgrade slower or faster. The claim the author makes is that hardware used to fall apart in about five years, and that could've been used to justify the short-term support it was getting from software... about 20-30 years ago. But things changed, and we learned how to make hardware that easily lasts 15 years and now we need to re-think how we are dealing with this fact.
I don't think it's stupid advice to ask people to choose their tools based on their needs.
If you need 15-year support, you choose software that promises to provide that, not something that doesn't.
Just as if you were building a house in Greenland, you would choose different building styles and materials compared to building a house in California. His complaints seem more like building a cheap California house in Greenland and then complaining that it's a bad house.
Society or companies are, in many cases, already paying for LTS of much less than 15 years. So at least there should be a price tag to compare the options.
If I buy a car (which is a lot of software these days), I expect something like 20 years of LTS. So it is as simple as that: software support needs to be tied to a reasonable hardware lifetime, otherwise it is planned obsolescence (at least if you cannot run it air-gapped). 5 years is reasonable for servers, phones or toys, I guess. But many embedded systems in particular, which in the worst case have been certified, need longer support (medical, automation, transport, ...). Maybe better circular economies for HW would be a better way out in the future. Maybe we could even save resources if, e.g., energy-expensive computing parts of HW could be replaced. The problem is current certification procedures and related costs, I guess.
The cost of maintaining old versions is constant, spread over an ever-dwindling customer base which is why the price grows exorbitant. That said, Bull still maintains the GECOS mainframe OS that was already obsolete when it gave UNIX the pw_gecos field in the 70s, but you have to pay for it.
> The article is full of shit, the author wants free LTS support for 15 years so he can more easily make money...
The article doesn't say that. It says that the proclaimed LTS releases aren't really LTS. The article says that all we have is short-term releases; it doesn't go into detail on how much we should pay for any given length of support at all.
I don't see why wanting to make more money is a problem; the easier the better, imo. The author never claimed that they aren't willing to pay for any services, though, so you are putting words in their mouth. But let's say for the moment that they wanted to pay for real LTS releases instead of what we have today -- they'd need to convince many people to share the brunt of doing so, otherwise it's going to be very expensive for them. Their disappointment comes from the perception that a lot of those who would potentially benefit from sharing this expense with them don't do so, chasing short-term gains and disregarding long-term losses.
---
From my personal experience: we dug ourselves into a hole by creating short release cycles. We ended up in a situation where every new iteration of software we release is of despicable quality, and there's no hope of improving the quality because the next release is already on the horizon and the most and best resources are dedicated to it. Releasing often is seen as a primary driver of profits. This acts to demotivate and devalue QA teams, who, in the present day, are seen more and more as a waste of resources and are even sometimes repurposed to do things unrelated to QA, such as release management or even customer support. And, I think, this is what the author talks about when they talk about how agile is a bad thing.
The ultimate claim the author makes is that we don't really need short release cycles. Neither individual users nor companies, no matter how big. We'd benefit from higher-quality software released less often. We don't need to pay extra for longer-term support, as you put it. What you perceive as longer-term should be the norm. We chose the wrong driver for companies' profits -- selling new versions works, but leaves customers more and more dissatisfied with the results. Perhaps we should think about a different way to pay for programmers' work? Perhaps instead of version upgrades we need to sell maintenance? -- I don't know, and I'm not enough in the know about how the profit is made in this instance and where it's possible to take it from.
I think his comments like "if you have it in your fool head we should pay money for it" says that he expects it for free.
I also think that if the products he produces need 15-year support, he should choose operating systems that provide that; for example, QNX comes close. Instead he chooses to complain about an OS that has never ever promised anything like this and that he can get for free.
This has been shown to be bullshit time and again, but people keep repeating it as if it were a valid argument -- why do you think the author, or anyone else, has a choice of operating system they can install? Based on LTS policy? Really? On what planet do you live?
We don't live in a world with infinite supply of interchangeable operating systems that only differ in one interesting characteristic. For most software products out there there either isn't a choice at all, or the choice is between bad and worse.
But this isn't even the point. Someone, well, a group of people, made a decision about how long the support for the system should be. OP believes they were wrong. Telling them to try a different system solves nothing, even if that worked for them. The conflict is still there: someone made a bad decision, and even if OP no longer actually needs to rely on that decision, it's still bad.
Did I read this correctly?