
> that person would likely lose his/her license and potentially face criminal penalties.

What if it were an unlicensed human encouraging someone else's delusions? I would think that's the real basis of comparison, because these LLMs are clearly not licensed therapists, and we can see from the real world how entire flat earth communities have formed from reinforcing each other's delusions.

Automation makes things easier and more efficient, and that includes making it easier and more efficient for people to dig their own rabbit holes. I don't see why LLM providers are to blame for someone's lack of epistemological hygiene.

Also, there are a lot of people who are lonely and for whatever reason cannot get their social or emotional needs met in this modern age. Paying for an expensive psychiatrist isn't going to give them the sense of friendship they're craving. If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?

> if waymo is better than the average driver, but still gets into an accident, who should be held accountable?

Waymo of course -- but Waymo also shouldn't be financially punished any harder than humans would be for equivalent honest mistakes. If Waymo truly is much safer than the average driver (which it certainly appears to be), then the amortized costs of its at-fault payouts should be way lower than the auto insurance costs of hiring out an equivalent number of human Uber drivers.


> I would think that's the real basis of comparison

It's not, because that's not the typical case. LLMs encourage people's delusions by default, it's just a question of how receptive you are to them. Anyone who's used ChatGPT has experienced it even if they didn't realize it. It starts with "that's a really thoughtful question that not many people think to ask", and "you're absolutely right [...]".

> If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?

There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.

Talk to ChatGPT and try to put yourself into the shoes of a hurtful person (e.g. what people would call "narcissistic") who's complaining about other people. Keep in mind that they almost always suffer from a distorted perception, so they genuinely believe that they're great people.

They can misinterpret some innocent action as a personal slight, react aggressively, and ChatGPT will tell them they were absolutely right to get angry. They can do the most abusive things, and as long as they genuinely believe that they're good people (as they almost always do), ChatGPT will reassure them that other people are the problem, not them.

It's hallucinations feeding into hallucinations.


> LLMs encourage people's delusions by default, it's just a question of how receptive you are to them

There are absolutely plenty of people who encourage others' flat earth delusions by default, it's just a question of how receptive you are to them.

> There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.

Again, that sounds like a people problem. Dictators infamously fall into this trap too.

Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike. If others are okay with having their egos stroked and their delusions encouraged and validated, that's their prerogative.


> If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.

It's not a matter of liking or disliking something. It's a question of whether that thing is going to heal or destroy your psyche over time.

You're talking about personal responsibility while we're talking about public policy. If people are using LLMs as a substitute for their closest friends and therapist, will that help or hurt them? We need to know whether we should be strongly discouraging it before it becomes another public health disaster.


> We need to know whether we should be strongly discouraging it before it becomes another public health disaster.

That's fair! However, I think PSAs on the dangers of AI usage are very different in reach and scope from legally making LLM providers responsible for the AI usage of their users, which is what I understood jsrozner to be saying.


>Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.

We're not holding LLMs to a higher standard than humans, we're holding them to a different standard than humans because - and it's getting exhausting having to keep pointing this out - LLMs are not humans. They're software.

And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.

And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem. We tend not to accept that kind of behavior in other people, because we understand the very real negative consequences of mass delusion and sociopathy. Why should we accept it from software?


> LLMs are not humans. They're software.

Sure, but the specific context of this conversation is the human roles (taxi driver, friend, etc.) that this software is replacing. Ergo, when judging software as a human replacement, it should be judged against how well humans fill those traditionally human roles.

> And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.

Fair point.

> And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem.

Fair point again. Thanks for helping me gain a wider perspective.

However, I don't see it as inevitable that this becomes a serious large-scale problem. In my experience, the current GPT 5.1 has already become a lot less cloyingly sycophantic than Claude is. If enough people hate sycophancy, it's quite possible that LLM providers will be incentivized to keep improving on this front.

> We tend not to accept that kind of behavior in other people

Do we really? Third-party bystanders may react negatively to cult leaders, but the cult followers themselves certainly don't feel that way. If a person freely chooses to seek out and associate with another person, is anyone else supposed to be responsible for their adult decisions?



