> Viewed through the lens of digital autonomy and citizenship, the question isn’t simply “Is Linux perfect?” but rather: Do we want our fundamental computing environment to be ultimately under our control, or controlled by private interests with their own incentives?
As a user of Linux as my main desktop OS for more than 20 years, a user of Linux far longer than that, and a promoter of FOSS before that was a term, this has always been the question. Most of the world does not care. I suspect that is more true today than ever before. There are now adults that grew up in the age of social media that have no idea how local computing works.
Not to be negative, but the "obstacles" to adopting Linux were never actually obstacles most of the time. Fifteen years ago my mother started using Linux as her main OS with no training. I gave her the login information, but never had a chance to show her how to use it, and she just figured it out on her own. Everything just worked, including exchanging MS Office documents for work.
> Most of the world does not care. I suspect that is more true today than ever before. There are now adults that grew up in the age of social media that have no idea how local computing works.
Yep. I was amazed when I was talking to a friend who's a bit younger (late 20s) and told him about a fangame you could just download from a website (Dr Robotnik's Ring Racers, for the record) and he was skeptical and concerned at the idea of just downloading and running an executable from somewhere on the internet.
I suspect most adults these days are like this; their computing experience is limited to the web browser and large official corporate-run software repositories e.g. app stores and Steam. Which ironically means they would do just fine on Linux, but there's also no incentive for them to switch off Windows/MacOS.
To them, Microsoft and Apple having control of their files and automatically backing up their home directory to Azure/iCloud is a feature, not a problem.
> Ironically, being concerned and skeptical about running random executables from the internet is a good idea in general.
I agree you shouldn't run random executables, but the key word is "random". In this case, Ring Racers is a relatively established and somewhat well-known game, plus it's open-source.
It doesn't guarantee it's not harmful of course, but ultimately for someone with the mindset of "I should never run any programs that aren't preapproved by a big corporation", they may as well just stick to Windows/MacOS or mobile devices where this is built into the ecosystem.
Open-source only matters if you have the time/skill/willingness to download said source (and any dependencies') and compile it.
Otherwise you're still running a random binary and there's no telling whether the source is malicious or whether the binary was even built with the published source.
It's no guarantee, but it's a positive indicator of trustworthiness if a codebase is open source.
I don't have hard numbers on this, but in my experience it's pretty rare for an open source codebase to contain malware. Few malicious actors are bold enough to publish the source of their malware. The exception that springs to mind is source-based supply chain attacks, such as publishing malicious packages to PyPI, Python's package index, for installation via pip.
You have a valid point that a binary might not correspond to the supposed source code, but I think this is quite uncommon.
> It's no guarantee, but it's a positive indicator of trustworthiness if a codebase is open source.
It's something we as techies like to believe due to solidarity or belief in the greater good, but I'm not sure it's actually justified? It would only work if there's a sizeable, technically-inclined userbase of the project so that someone is likely to have audited the code.
If you're malicious, you can still release malicious software with an open-source cover (ideally without the source including the malicious part - but even then, you can coast just fine until someone comes along and actually checks said source). If you're anonymous, there's little actual downside to being detected; you can just try again under a different project.
Remember that the xz-utils backdoor was only discovered because they fucked up and caused a slowdown and not due to an unprompted audit.
> It would only work if there's a sizeable, technically-inclined userbase of the project so that someone is likely to have audited the code.
Not really. There's a long history of seemingly credible closed-source codebases turning out to have concealed malicious functionality, such as smart TVs spying on user activity, or the 'dieselgate' scandal, or the Sony rootkit. This kind of thing is extremely rare in Free and Open Source software. The creators don't want to run the risk of someone stumbling across the plain-as-day source code of malicious functionality. Open source software also generally makes it easy to remove malicious functionality, or even to create an ongoing fork project for this purpose. (The VSCodium project does this, roughly speaking. [0])
Firefox's telemetry is one of the more high-profile examples of unwanted behaviour in Free and Open Source software, and that probably doesn't even really count as malware.
> If you're malicious, you can still release malicious software with an open-source cover (ideally without the source including the malicious part - but even then, you can coast just fine until someone comes along and actually checks said source).
I already acknowledged this is possible; you don't need to spell it out. Again, I don't have hard numbers, but it seems to me that in practice this is quite rare compared to malicious closed-source software of the 'ordinary' kind.
A good example of this was SourceForge injecting adware into binaries. [1]
> Remember that the xz-utils backdoor was only discovered because they fucked up and caused a slowdown and not due to an unprompted audit.
Right, that was a supply chain attack. They seem to be increasingly common, unfortunately.
Of course this is true. But you can keep going down the rabbit hole. How do you know there isn't a backdoor hidden in the source code? How do you know there isn't a compromised dependency, maybe intentionally?
Ultimately there needs to be trust at some point because nobody is realistically going to do a detailed security analysis of the source code of everything they install. We do this all the time as software developers; why do I trust that `pip install SQLAlchemy==2.0.45` isn't going to install a cryptominer on my system? It's certainly not because I've inspected the source code, it's because there's a web of trust in the ecosystem (well-known package, lots of downloads, if there were malware someone would have likely noticed before me).
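For what it's worth, pip does have a hash-checking mode that narrows that trust a bit: record the hash of the exact artifact you vetted once, and pip will refuse anything that doesn't match later. A rough sketch (the hash is a placeholder, not SQLAlchemy's real one, and in this mode every dependency needs a pinned hash too):

    # requirements.txt: pin the exact artifact, not just the version
    SQLAlchemy==2.0.45 --hash=sha256:<value printed by 'pip hash' for the wheel you vetted>

    # refuse to install anything whose hash doesn't match
    pip install --require-hashes -r requirements.txt

It doesn't solve the first-time trust decision, of course; it just stops the artifact from silently changing underneath you afterwards.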
> still running a random binary
Again "random" here is untrue, there's nothing random about it. You're running a binary which is published by the maintainers of some software. You're deciding how much you trust those maintainers (and their binary publishing processes, and whoever is hosting their binary).
The problem is that on Windows or your typical Linux distro "how much you trust" needs to be "with full access to all of the information on my computer, including any online accounts I access through that computer". This is very much unlike Android, for example, where all apps are sandboxed by default.
That's a pretty high bar; I don't blame your friend at all for being skeptical.
Right, which goes back to the main point; "total control of your computing environment" fundamentally means that you are responsible for figuring out which applications to trust, based on your own choice of heuristics (FOSS? # of downloads/Github stars? Project age? Reputation of maintainers and file host? etc...) Many, maybe most people don't actually want to do this, and would much rather outsource that determination of trust to Microsoft/Google/Apple.
> Right, which goes back to the main point; "total control of your computing environment" fundamentally means that you are responsible for figuring out which applications to trust, based on your own choice of heuristics
Hard disagree. Total control of my computing environment would be to allow an application access to my documents, a space to save a configuration, perhaps my Videos folder or even certain files in that folder. Or conversely, not.
At the moment, none of the desktops give me the ability to set a level of trust for an application. I can't execute Dr. Robotnik's Ring Run (or whatever the example was) and specify what it can or cannot access. There may be a request for permission at the system level, but that can be explained away, as usually happens with iApps and Android when they request some scary-sounding permission groups.
And it also doesn't stop malware from accessing my documents. Sometimes my Mac asks if an application is allowed to access Documents, but it isn't consistent.
> they are hidden away inside the settings, and they are not granular.
The switches default to off though, with a prompt on first attempt at accessing the protected resource.
The problem is that they're leaky like a sieve, and how the permission model and inheritance work is unclear (I once had the Terminal app ask me for permission - does that mean anything I run from the terminal now automatically inherits it? - and so on).
> Open-source only matters if you have the time/skill/willingness to download said source (and any dependencies') and compile it.
Not really. The fact that an application is open-source means its originator can't rug-pull its users at some random future date (as so often happens with closed-source programs). End users don't need to compile the source for that to be true.
> Otherwise you're still running a random binary and there's no telling whether the source is malicious or whether the binary was even built with the published source.
This is also not true in general. Most open-source programs are available from an established URL, for example a Github archive with an appropriate track record. And the risks of downloading and running a closed-source app are much the same.
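To make "trusting the maintainers and whoever hosts their binary" a bit more concrete: many projects publish a checksum file alongside each release, so you can at least confirm the download matches what the maintainers say they uploaded. A rough sketch with hypothetical file names (release pages differ in the details):

    # check the downloaded archive against the project's published checksum list
    sha256sum -c SHA256SUMS --ignore-missing

    # or just print the hash and eyeball it against the release page
    sha256sum ringracers-linux-x86_64.tar.gz

That only proves the file is the one that was published at that URL; whether the maintainers themselves are trustworthy is the separate judgement being discussed here.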
The kind of rug-pulling you describe only works if the software implements an online licensing check/DRM, and either way has nothing to do with security against malicious behavior.
> Github archive with an appropriate track record
How do you judge the "track record"? Github stars can be bought. Marketing can be used to inflate legitimate usage of a program before introducing the malicious behavior.
> the risks of downloading and running a closed-source app are much the same
But that's my point - open-source doesn't really change the equation there unless you are actually auditing the source and building & running said source. If you're just relying on a binary download, you're no better off than if you were downloading proprietary software in binary form.
> The kind of rug-pulling you describe only works if the software implements an online licensing check/DRM, and either way has nothing to do with security against malicious behavior.
My point was that an open-source program cannot rug-pull its users without the obvious remedy of forking the project and removing the offending code. With open source, such forks are commonly seen. With closed source, that remedy is not possible and often illegal.
For both options, you have to trust the source, which makes that a non-issue. You can checksum the Linux kernel to satisfy yourself that it came from a trusted source. You can checksum the Windows kernel to satisfy yourself that you're about to be screwed.
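For what it's worth, "checksum the Linux kernel" in practice usually means verifying the signed tarball from kernel.org, roughly like this (the version number is just an example; the detached signature covers the uncompressed tarball):

    # fetch the release managers' keys and verify the signature
    gpg --locate-keys torvalds@kernel.org gregkh@kernel.org
    xz -d linux-6.6.tar.xz
    gpg --verify linux-6.6.tar.sign linux-6.6.tar

Which, as above, only tells you the tarball came from the people you already decided to trust.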
> But that's my point - open-source doesn't really change the equation there unless you are actually auditing the source and building & running said source.
In the open-source world, knowing how computers work is essential. In the closed-source world, knowing how computers work is somewhere between pointless and illegal. This is how open-source "changes the equation."
Modifying open-source code is welcome and accepted. Modifying closed-source code breaks the law. Take your pick.
As a threat model, it's a little outmoded. With the browser being an application platform, you do this every time you click a link. There's something sinister about considering a local open source application as dangerous but a closed source "cloud" app as safe.
To be fair, downloading and running random executables from the internet is a genuinely terrible security model when the OS (like Windows, Linux, or (to a lesser extent) MacOS) does nothing to prevent it from doing anything you can do.
> It's quite concerning that you frame this as a bad idea.
Downloading and executing other people's compiled software is how things worked for many decades. It's only been in recent years that people have come to believe that Google/Microsoft/Apple should be the final authorities on which executables are safe to run.
> It's only been in recent years that people have come to believe that Google/Microsoft/Apple should be the final authorities on which executables are safe to run.
I mean, the OS package repositories at least have some vetting and processes behind them, as opposed to a random website found online - so there's surely middle ground there!
You don't need to give everything up to a few big corpos to have at least a bit of a sense of security - I personally trust the Debian maintainers quite a bit due to their track record, short of bad actors infiltrating the community or accounts getting compromised.
> Most of the world does not care. I suspect that is more true today than ever before
100% of the people I have spoken with, from Uber drivers to grandparents, have noticed and hate the rental/subscription economy, and are sympathetic to the fight against it. In 2025 I don't think I've had a single person defend the status quo, because they all know what's coming.
I think Arduino and RPi demonstrate that there is still a relatively strong attraction for tinkering. In the past, freedom meant a lot to tinkerers. My sense is that this is not so true today. Perhaps I am wrong. It may be that few people respect licensing enough to care. As long as somebody (not necessarily the producer) has made a youtube video of how to hack something, that's good enough.
This was probably always true. Replace youtube with Byte magazine and it was probably the same 45 years ago. I wonder if the percentage of true FOSS adherents has changed much. It would be a bit of a paradox if the percent of FOSS software has exploded and the percent of FOSS adherents has declined.
Note: I mean "adherent" to mean something different than "user".
> I think Arduino and RPi demonstrate that there is still a relatively strong attraction for tinkering
Raspberry Pi is an interesting example because it is constantly criticized by people who complain about the closed source blobs, the non-open schematics, and other choices that don’t appease the purists.
Yet it does a great job at letting users do what they want to do with it, which is get to using it. It’s more accessible than the open counterparts, more available, has more guides, and has more accessories.
The situation has a lot of parallels to why people use Windows instead of seeking alternatives: It’s accessible, easy, and they can focus on doing what they want with the computer.
The problems with SBCs are primarily software. I have a ton of SBCs, mostly Raspberry Pis and OrangePis.
OrangePi boards are great. The Zero is almost stamp-sized; the Plus and Pro variants have tons of options, on-board NVMe, fast-ish eMMC, great official cases, and whatnot.
But, guess what? The OS is bad. I mean, unpatched, mishmashed, secured as an open door bad.
You get an OS installation which automatically drops you into a root terminal on the console. There are many services on board that you don't need. It ships as an image, not an installer, and all the repositories point to Chinese servers.
Armbian is not a good solution either, because it's not designed to roll over between releases the way Debian and Raspberry Pi OS are. So you can't build a long-term system on these boards the way you can with a Raspberry Pi.
On top of that, you can't boot anything mainline on most of them, because either the drivers are closed source, or the kernel has weird hacks to make things work, or generally both.
So what makes the Raspberry Pi is not the hardware, but the software support.
I don't think tinkering is the dominant culture behind tech anymore, but it's definitely operating at a larger scale than ever before. There are more OSS projects than ever, and there are tons of niche areas with entire communities. Examples could include LoRa radios (or LoRA adaptors!), 3D printing, FPGA hacking, new games for retro hardware...
There was a gap before (think the 90s and early 2000s) between the niche tinkering crowd and the more mainstream user/power-user/programmer crowds. There were knowledge gaps between all these groups, but they were surmountable.
Now the groups have drifted apart. Even if you're a programmer, unless you care or get excited about the hardware, you don't know how things work. You follow the docs, push the code to the magical gate via that magical command, and it works. It's similar even for desktop applications.
When you care about performance, and try to understand how these things work, you need to break that thick ice to start learning things, and things are much more complicated now, so people just tend to run away and pretend that it's not there.
Also, since the "network is reliable, computing cheap" gospel took hold, 90% of the programmers don't care about how much performance / energy they waste.
I'm guilty of this. I started with a C64 and love hardware and programming, but modern CPUs and MCUs are so complicated I can't be bothered learning about them.
The old 8-bit Arduinos were pretty understandable, but with an ESP-32 I just assume the compiler knows what it's doing and that the Espressif libs are 'good enough'.
You are right. Most will never care. I think of it like this: let's try to keep the lights on for the folks that inevitably get burned and need an escape hatch. Many will not, but some always will. At least that's my way of not being a techno-nihilist.
> They like it given a chance. My daughters for example far prefer Linux to Windows.
The two topics are orthogonal. GP talks about "local computing" vs. "black box in the cloud", the difference between running it vs. using it. You're talking about various options to run locally, the difference between running it this way or that way.
Linux or Windows users probably understand basic computing concepts like files and a file system structure, processes, basic networking. Many modern phone "app" users just know what the app and device shows them, and that's not much. Every bit of useful knowledge is hidden and abstracted away from the user. They get nothing beyond what service the provider wants them to consume.
> There are now adults that grew up in the age of social media that have no idea how local computing works.
Very few people of any age have ever understood how local computing (or any computing) works. There are probably more now, since most of the world is connected.
Profit scale has reached a point where commercial OS creators have to do stuff like shove ads into the UI. There's probably more legitimate need from non-developers to use Linux now than ever before, just to get a better base-line user experience.
>> Do we want our fundamental computing environment to be ultimately under our control, or controlled by private interests with their own incentives?
Define "our".
Because having general compute under developer/engineering control does not mean end-users want, need, or should, tinker inside appliances.
So there are two definitions of our: our end-users, and ourselves the engineers.
Worldwide, in aggregate, far more harm comes to users from malware (destroying work at the office and life memories at home) than benefit comes from non-tech-savvy users being able to agree to a dialog box (INSTALL THIS OR YOUR VOTING REGISTRATION WILL BE SWITCHED IN 30 MINUTES!!!) and end up with a rootkit.
Our (hackers') tinkering being guardrailed behind extra steps, by hardware we can still work within, seems a good thing if it helps general computing become as "don't make me think, and don't do me harm" as a nightstand radio clock.
It's not hard to see through the false "only two cases" premise of the quote, however un-hip it may be to say so.
The key part of your post is "has to tell people". Absolutely nobody on SO was obligated to respond to anything. The toxicity was a choice, and those doing it enjoyed it.
They were on it in 2012! And no, it didn't help. The site's value was in providing users with answers to their questions, but that was never on the radar of the folks running SO. I guess they just assumed the free labor of the people that built the place would continue forever.
Edit: I even told them at the time that this was not the right way to proceed. They had no reason to close questions or even review them. All they needed was a holding pen where first-time questioners posted their questions, and from which questions got promoted when users viewed them as worthy of an answer.
This topic has been beaten to death, but the defenders of the toxic environment (including the SO founders) have never understood why the environment they promoted killed SO.
It's not because people disliked the rudeness, though that didn't help and was never necessary. It's because the toxicity became the only goal. And the toxicity replaced both accuracy and usefulness.
I wanted to learn web development during the lockdown of 2020. The first thing I learned was that SO was completely useless. Everything on there was wrong, and it stayed wrong due to incompetent moderation that prevented bad information from being corrected. Everything new was marked as a dupe even when the other post was wrong or irrelevant.
That killed SO. It was a decision of the founders and the mods. They took great pride in doing it, too.
The remaining community understands perfectly well why people have left. It's just deemed irrelevant.
Way too many people joined the site, way too quickly, based on a false understanding of the intended premise. Gradually they filtered themselves out as they realized the mistake. Those few who saw value in the actual idea remained.
It was always going to be relatively few people who truly want to participate in something like that. And that's fine. Ideas don't have to be popular to be worthwhile or valid.
Exactly. I dare say it was a matter of UX. The experience I got was that you arrived at an SO question from Google and the accepted answer was from years ago and no longer relevant.
They did a quick patch by letting you sort answers by some attribute, but darn, that's low-effort product/UX dev. What did those teams do in 10 years??
I know it was probably said just as a joke, but are you really writing papers using Rust? I don't use Rust, BUT if you've got a better way to write symbol-heavy type theory and/or logic than having to make PNGs and put them in as images in a word processor, I would love to hear about it.
Hate to argue with people on the internet, but your graph doesn't actually show what you claim. The TeX data was stable until late 2021, whereas the SO decline started in 2017. I would also expect some correlation, with SO's decline being a drag on the TeX site.
I would ascribe that to these communities evolving differently. There is no reason to assume that the popularity of LaTeX tracks the popularity of programming languages. It's a typesetting system. And that doesn't even take into account communities that exist parallel to SO/SE. Surely there exist communities for LaTeX today that have been around since before SO began its life.
A lesson can be learned here. If you don't introduce some form of accountability for everyone that influences the product, it eventually falls apart. The problem, as we all know now, is that the moderators screwed things up, and there were no guardrails in place to stop them from killing the site. A small number of very unqualified moderators vandalized the place and nobody with common sense stepped in to put an end to it.
> It sucks that the copyright period in the US is so ridiculously long
Do you mean for humans? We have this weird asymmetry where copyright law applies to humans but a large company training an LLM is not subject to copyright law at all.
Honestly, I wouldn't consider publishing a book if it didn't have that information. There's no reason to give up half or more of the potential market for a book because it's arbitrarily pitched at advanced users. Assuming the customer knows how to use pip would be crazy.
Right, but you won't receive 8x the benefits that your payroll taxes will eventually pay for, either. In fact, your wife will probably receive a higher percentage of her FICA contributions back in eventual SS benefits than you will, because there's a slight element of redistribution built in; it's not a straightforward "you get back what you paid in".
For me, the much more concerning part of Social Security is the demographic challenge: the program started out with over 10 workers per retiree and is now down to fewer than three [0]. It doesn't matter how you play with the sliding scales of who pays how much and what the earnings cap is, when in the end it's two to three working people's wages being taxed to support one retiree.
That's okay. I also won't receive as much benefit from my income taxes as my aunt, who has a serious brain injury and can't take care of herself. I don't exactly need the money.