
>This is a problem with LLMs

It's a problem with people using LLMs for something they're not supposed to be used for. If you want to read up on history, grab some books from reputable authors; don't go to a generative AI model that by its very design can't distinguish truth from fiction.



Yes and no.

Yes, it is partially a problem with improper use. But as a practical matter, we know that convenience and confidence are powerful pulls to very large portions of the population. At some point, you have to treat human nature (or at least, human nature as manifested in the world we currently have) as a given, and consider things in light of that fixed background - not in light of the background of humanity you wish we had. If we lived in a world where everyone, or even where most people, behaved reasonably, we'd do a lot of things differently.

Previous propaganda efforts didn't automatically construct a roughly self-consistent worldview on demand for whatever false information you felt like feeding into them, either. So I do think LLMs are a powerful tool for that, for roughly the same reason they're a powerful tool in other contexts.


"Previous propaganda efforts also didn't automatically construct a roughly-self-consistent worldview on demand for whatever false information you felt like feeding into them"

Religion


>If we lived in a world where everyone, or even where most people, behaved reasonably

If we're not living in a world where most people behave reasonably, then the Chinese got it right, and censored LLMs and kids' scissors it is. I do have a pretty naturalistic view on this, in the sense that you always get the LLM you deserve. You can either do your own thinking or have someone else do it for you, but you can't hold the position that we're all sheeple and deserve to be free-thinkers at the same time.

So it's always a skill issue: you can only start to think critically yourself. Enlightenment, as the quote goes, is freeing yourself from your own self-incurred tutelage.


The fact that the background reality is annoying to your preferred systems doesn't make it not true, though. "Doing your own research" is practically a cliche at this point, and it doesn't mean anything good.

The fact is that even highly intelligent people are not smart enough to avoid deliberate disinformation efforts by actors with a thousand times their resources. Not reliably. You might avoid 90% of them, but if there are a hundred such efforts going on at a time, you're still gonna end up being super wrong about ten things. You detect the Nigerian prince phone call, but you don't detect the CFO deepfake on your Zoom call, that kind of thing.

When you say it's a "skill issue", I think you're basically expecting a skill bar that is beyond human capability. It's like saying the fact that you can get shot is a "skill issue" because in principle you could dodge every bullet like you're in the Matrix - yeah, but you're not actually able to do that!

> but you can't hold the position that we're all sheeple and deserve to be free-thinkers at the same time.

I don't. I believe it's mostly the first one. I don't know what other conclusion I can possibly take from everything that has happened in the history of the internet - including having fallen rather badly for disinformation myself a couple of times in the past.

You should be a freethinker when it comes to areas where you have unique expertise: your specific vocation or field of study, your unique exposure to certain things (say, small subgroups you happen to be in the intersection of), and your own direct life experiences (do you feel good today? are the people you know struggling?). Everywhere else, you should bet on institutions that have otherwise proved to earn your trust (by generally matching your expectations within the areas where you do have expertise or making observably correct past predictions).


Paraphrasing this great quote I got from a Vsauce video:

"A technology is neither evil nor good, it is a key which unlocks 2 doors. One leads to heaven, and one to hell. It's up to the humans to decide which one they pick."


Unfortunately, there's no disclaimer saying that, and more and more people will go down this route.


Scary, too, to think you'd no longer need to go to school when you can just ask your device what to do/think.


This is exactly why millions of Americans choose home schooling. So that their children don't get confronted with science and philosophy.


This is not the place to discuss this (wrt religion) but I am very much for science/philosophy.

I guess to further explain my point above: the current/past way to learn math is to start from the basics (addition, decimals, fractions, etc.) vs. a future where you don't even know how to do that; you just ask.

Some things are naturally headed that way already, e.g. writing by hand/pencil less than typing/talking.

Idk... it's like coding with/without Copilot. New programmers now have that assist by default.

edit: I also want to point out, despite how tin-foil hat I am about something like Neuralink, I think it would be interesting if in the future humans had one implanted at birth and it (say, a symbiote AI) grew with them.


I'd think it's more likely that people choose homeschooling because of the lack of philosophy in the mainstream curriculum.


I agree.

This is not an LLM problem.

This is a people using LLMs when they should use authoritative resources problem.

If an LLM were to tell you that your slab's rebar layout should match a certain configuration and you believe it, well, don't be surprised when the cranks are all in the wrong places and your cantilevers collapse.

The idea that anyone would use an LLM to determine something as important as a building's specifications seems like patent lunacy. It's the same for any other endeavor where accuracy is valued.


Accuracy is not knowably possible in some domains, though, which is worth noting because it is a very big problem.


> It's a problem with people using LLMs for something they're not supposed to be used for.

To me the problem is that there's absolutely no way to know what an LLM is or is not "supposed" to be used for.



