The anecdata is strong; I have multiple cases myself just from browsing this morning.
But I'm leaning towards incompetence. Some US generated stuff was most likely moved to Oracle shitboxes, causing encoding issues and unreliable streaming.
...or it's malice and they're scanning the data and intentionally throttling traffic for unwanted content.
Data? No. None of these companies are making their data freely available for analysis or being transparent about how their algorithms work. People have complained for a while that Twitter / X seems to suppress the visibility and reach of profiles or posts that disagree with Musk’s views. The recent open sourcing of their algorithm is meaningless since there’s no evidence of what they actually have in production or what data / configuration is used with it.
So the best we can do is anecdotal examples. And it’s also obvious that Trump avoided banning TikTok for months, illegally, because he wanted to have another platform serve as a mouthpiece. He now has that by forcing a sale of TikTok to his friend, Larry Ellison.
As another anecdote in favor of Universal Blue's approach: my mother (who can't use a computer beyond checking email and regular websites) has been swapped to Aurora and has nothing but positive feedback.
To be fair, the people who barely use the computer are the easiest to move to Linux. As Mental Outlaw said, "to a normie, an OS is just a bootloader for Google Chrome". If all you do is check emails, it doesn't really matter what OS you have installed.
Switching to Linux hasn't been an issue for those users for a long time - it's usually gamers, users of professional software, or IT people with deeply established workflows who have trouble.
I guess the only part that matters is updates, and atomic systems like Fedora Silverblue do allow you to enable automatic updates without the fear of breaking everything, which is great.
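For what it's worth, enabling this on a Silverblue-family system is a two-step affair. A minimal sketch, assuming a recent rpm-ostree (the policy value and timer unit are the current defaults, so check your version's docs):

```ini
# /etc/rpm-ostreed.conf
# "stage" downloads and deploys updates in the background;
# they take effect atomically on the next reboot.
[Daemon]
AutomaticUpdatePolicy=stage
```

Then reload the daemon and enable the timer: `sudo systemctl reload rpm-ostreed && sudo systemctl enable --now rpm-ostreed-automatic.timer`. If an update is ever bad, the previous deployment is still right there in the bootloader menu.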
Doesn't matter though. Every single one of these "casual" users I know has a terribly outdated device with a broken battery that doesn't even charge anymore.
Laptop battery life is mostly an issue of inefficient CPUs nowadays. I don't know about other distros, but at least Fedora's default power saving settings give battery life very much comparable to Windows. Which is obviously still nothing compared to MacBooks or even Snapdragon laptops.
Not the same thing at all. The userspace is different and may or may not be power-efficient, and the kernel's power management is only well tested for specific devices.
My old man was using Ubuntu 20 years ago because all he needed was a browser and OpenOffice. Shoot, with a live CD you can even make computer use foolproof, since it's impossible for them to permanently break it.
When my dad (83) was looking to replace his ancient Win7 Dell PC, I convinced him to buy a Mac mini since he's had an iPad for a long time, and more recently an iPhone.
Initially he was concerned about the "new" interface after using Windows since 3.11 days, but within an hour he was happy doing his usual "basic tasks" (email, basic Excel, Word for letters, printing, etc). He was amazed both his printers (colour/scanner, b&w) worked with zero hassle after simply plugging them in.
Now he loves the ability to FaceTime anyone in the family (kids, grandkids, etc.) at the click of a button using the webcam plugged into the Mini, and really enjoys the sync of photos, emails, notes, etc.
I think he would have really struggled with Windows 11 so I was tempted by an older-person friendly Linux distro if macOS wasn't an option.
Apple is by far the best-integrated ecosystem for non-tech-savvy users; it's incredible how consistently they follow this approach. The iPhone has shipped without a manual since version 1 (?).
I'm looking fearfully into the future: what will happen to Apple's product polish if one of these MBA or even PE guys takes over the CEO role?
They usually work well with printers, but I've run into some situations where I was just plain unable to get it to work with my Brother laser printer after a certain ChromeOS update. They screwed up something with the CUPS drivers and it just never worked.
On macOS at least I have a chance of being able to fix this stuff. ChromeOS is so locked down you can't even fix things.
I thought about mentioning my mom, since she's been my number one tech support client forever... And I was going to say that I am so certain of how solid this distro is that I would even install it on my mom's laptop without any hesitation.
I recently had to deal with a ministry in Canada, where a worker who had been there for 20 years failed even a basic reading-comprehension test. Then there were multiple issues with the OPC (Office of the Privacy Commissioner) failing entirely on a basic issue.
Another example is Ontario's tenant laws, which are constantly criticized as enabling bad tenant behavior; but reading the statute, full of months-long delays for landlords and two-day notices for tenants, paints a more realistic picture.
In fact, one such landlord lied, admitted to lying, and then had their lie influence the decision in their favor, despite it being known to be false, by their own word. The appeal mentioned discretion of the adjudicator.
Not sure how long that can go on before a collapse, but I can't imagine it's very long.
I think it should be perfectly OK to make value judgements of other people, and if they are backed by evidence, make them publicly and make them have consequences for that person's position.
A recent review of one of Canada's federal institutions showed the correct advice was given 17% of the time[0]. That's an 83% failure rate. Not a soul has been fired, unless something changed recently.
I do agree with your assessment, however, because any (additional) accountability would improve matters.
This is a definition of spam, not the only definition of spam.
In Canada, which is relevant here, the legal definition of spam requires no bulk.
Any company sending an unsolicited email to a person (where permission doesn't exist) is spamming that person, though the law expands the definition further than this as well.
I think the point being made is that the graphs don't show progress toward real-world applications. Being 99.9999999% or 0.000001% of the way to a useful application could be argued as no progress, given the stated metric. Is there a guarantee that these things can and will work given enough time?
Quantum theory says that quantum computers are mathematically plausible. It doesn't say anything about whether it's possible to construct a quantum computer in the real world of a given configuration. It's entirely possible that there's a physical limit that makes useful quantum computers impossible to construct.
Quantum theory says that quantum computers are physically plausible. Quantum theory lies in the realm of physics, not mathematics. As a physical theory, it makes predictions about what is plausible in the real world. One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.
The way to test out this theory is to try out an experiment to see if this is so. If this experiment fails, we'll have to figure out why theory predicted it but the experiment didn't deliver.
> One of those predictions is that it's possible to build a large-scale fault tolerant quantum computer.
Quantum theory doesn't predict that it's possible to build a large-scale quantum computer. It merely says that a large-scale quantum computer is consistent with the theory.
Dyson spheres and space elevators are also consistent with quantum theory, but that doesn't mean that it's possible to build one.
Physical theories are subtractive: something that is consistent with the lowest levels of theory can still be ruled out by higher levels.
Good point. I didn't sufficiently delineate what counts as a scientific problem and what counts as an engineering problem in QC.
Quantum theory, like all physical theories, makes predictions. In this case, quantum theory predicts that if the physical error rate of qubits is below a threshold, then error correction can be used to increase the quality of a logical qubit to arbitrarily high levels. This prediction could be false. We currently don't know all of the potential noise sources that could prevent us from building a quantum logic gate of similar quality to a classical logic gate.
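To make that threshold prediction concrete, here's a toy model of the scaling. The constants (A = 0.1, a 1% threshold) and the exact functional form are illustrative assumptions, not measured values for any real device; the qualitative behavior on either side of the threshold is what the theorem describes.

```python
# Toy model of threshold-theorem scaling for a distance-d code:
# p_L ~ A * (p / p_th) ** ((d + 1) // 2).
# A and p_th below are made-up illustrative constants.

def logical_error_rate(p_phys, distance, A=0.1, p_th=1e-2):
    """Approximate logical error rate per round at a given code distance."""
    return A * (p_phys / p_th) ** ((distance + 1) // 2)

# Below threshold: growing the code suppresses errors exponentially.
below = [logical_error_rate(1e-3, d) for d in (3, 5, 7)]
assert below[0] > below[1] > below[2]

# Above threshold: adding qubits only makes the logical qubit worse.
above = [logical_error_rate(3e-2, d) for d in (3, 5, 7)]
assert above[0] < above[1] < above[2]
```

The open empirical question is whether real hardware noise actually behaves like the independent-error model this kind of scaling assumes.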
Building thousands of these logical qubits is an engineering problem, similar to Dyson spheres and space elevators. You're right that being able to build one really good logical qubit at the lower level doesn't mean we can build thousands of them.
In our case, even the lower levels haven't been validated. This is what I meant when I implied that the project of building a large-scale QC might teach us something new about physics.
> The way to test out this theory is to try out an experiment to see if this is so. If this experiment fails, we'll have to figure out why theory predicted it but the experiment didn't deliver.
If "this experiment" is trying to build a machine, then failure doesn't give much evidence against the theory. Most machine-building failures are caused by insufficient hardware/engineering.
Quantum theory predicts this: https://en.wikipedia.org/wiki/Threshold_theorem. An experiment can show that this prediction is false. This is a scientific problem, not an engineering one. Physical theories have to be verified with experiments. If the results of the experiment don't match what the theory predicts, then you have to do things like re-examine the data, revise the theory, etc.
But that theorem being true doesn't mean "they will work given enough time". That's my objection. If a setup is physically possible but sufficiently thorny to actually build, there's a good chance it won't be built ever.
In the specific spot I commented, I guess you were just talking about the physics part? But the GP was talking about both physics and physical realization, so I thought you were also talking about the combination too.
Yes we can probably test the quantum theory. But verifying the physics isn't what this comment chain is really about. It's about working machines. With enough reliable qubits to do useful work.
You're right. I didn't sufficiently separate experimental physics QC from engineering QC.
On the engineering end, the answer to whether a large-scale quantum computer can be built is leaning toward "yes" so far. DARPA's QBI https://www.darpa.mil/research/programs/quantum-benchmarking... was created to answer this question, and 11 teams have made it to Stage B. Of course, only people who believe DARPA will trust this evidence, but that's all I have to go on.
On the application front, the jury is still out for applications that are not related to simulation or cryptography: https://arxiv.org/abs/2511.09124
Publishing findings that amount to an admission that you and others spent a fortune studying a dead end is career suicide and guarantees your excommunication from the realm of study and polite society. If a popular theory is wrong, some unlucky martyr must first introduce incontrovertible proof and then humanity must wait for the entire generation of practitioners whose careers are built on it to die.
Quantum theory is so unlikely to be wrong that if large-scale fault tolerant quantum computers could not be built, the effort to try to build them will not be a dead end, but instead a revolution in physics.
> I prefer reading the LLM output for accessibility reasons.
And that's completely fine! If you prefer to read CVEs that way, nobody is going to stop you from piping all CVE descriptions you're interested in through a LLM.
However, having it processed by a LLM is essentially a one-way operation. If some people prefer the original and some others prefer the LLM output, the obvious move is to share the original with the world and have LLM-preferring readers do the processing on their end. That way everyone is happy with the format they get to read. Sounds like a win-win, no?
However, there will be cases where lacking the LLM output, there isn't any output at all.
Creating a stigma around technology which is easily observed as being, in some form, accessible is expected in the world we live in. As it is on HN.
Not to say you are being any type of anything, I just don't believe anyone has given it all that much thought. I read the complaints and can't distinguish them from someone complaining that they need to make some space for a blind person using their accessibility tools.
> However, there will be cases where lacking the LLM output, there isn't any output at all.
Why would there be? You're using something to prompt the LLM, aren't you - what's stopping you from sharing the input?
The same logic applies, to an even larger extent, to foreign-language content. I'd 1000x rather have a "My english not good, this describe big LangChain bug, click <link> if want Google Translate" followed by a decent article written in someone's native Chinese, than a poorly-done machine translation output. At least that way I have the option of putting the source text into different translation engines, or perhaps asking a bilingual friend to clarify certain sections. If all you have is the English machine translation output, then you're stuck with that. Something was mistranslated? Good luck reverse engineering the wrong translation back to its original Chinese and then into its proper English equivalent! Anyone who has had the joy of dealing with "English" datasheets for Chinese-made chips knows how well this works in practice.
You are definitely bringing up a good point concerning accessibility - but I fear using LLMs for this provides fake accessibility. Just because it produces well-formed sentences doesn't mean you are actually getting something comprehensible out of it! LLMs simply aren't good enough yet to rely on them not losing critical information and not introducing additional nonsense. Until they have reached that point, their users should always verify the output for accuracy - which on the author side means they were, by definition, also able to write it on their own, modulo some irrelevant formatting fluff. If you still want to use it for accessibility, do so on the reader side and make it fully optional: that way the reader is knowingly and willingly accepting its flaws.
The stigma on LLM-generated content exists for a reason: people are getting tired of starting to invest time into reading some article, only for it to become clear halfway through that it is completely meaningless drivel. If >99% of LLM-generated content I come across is an utter waste of my time, why should I give this one the benefit of the doubt? Content written in horribly-broken English at least shows that there is an actual human writer investing time and effort into trying to communicate, instead of it being yet another instance of fully-automated LLM-generated slop trying to DDoS our eyeballs.
I completely agree; I prefer the original language as it offers more choice in how to consume it. I believe search engines segment content by source language though, so you would probably never see such content in search results for English-language queries. It would be cool if you could somehow signal to search engines that you are interested in non-native-language results. I don't even tend to see results in the second language of my Accept-Language header unless the query is in that language.
I'm sorry, but I don't buy the argument that we should be accepting of AI slop because it's more accessible. That type of framing is devious because it casts dissenters as not caring about accessibility. It has nothing to do with accessibility and everything to do with simply not wanting to consume utterly worthless slop.
People generally don't actually care about accessibility, and it shows, everywhere. There are obvious and glaring accessibility gains from LLMs that are entirely lost to the stigma.
Because authors do two things typically when they use an LLM for editing:
- iterate multiple rounds
- approve the final edit as their message
I can't do either of those things myself. And your post implicitly assumes there's underlying content prior to the LLM process; but it's likely the iterated interactions with an LLM that produce the content at all, i.e., there never exists a human-written rough draft or single prompt for you to read, either.
So your example is a lose-lose-lose: there never was a non-LLM text for you to read; I have no way to recreate the author’s ideas; and the author has been shamed into not publishing because it doesn’t match your aesthetics.
Your post is a classic example of demanding everyone lose out because something isn’t to your taste.
Unfortunately, the sheer amount of ChatGPT-processed texts being linked has for me become a reason not to want to read them, which is quite depressing.
Publix in the southeast US will give you anything that rings up wrong for free. I shopped there for 20+ years and only remember getting a handful of things free.
I'm in Toronto and I've never had anything ring up incorrectly at Wal-Mart. I can't recall ever having anything ring up incorrectly anywhere else, either. There have maybe been a couple of times where a sale price didn't apply to all the SKUs I thought it did.