
something something bootstraps

Food should only be for sustenance, not emotional support. We should only sell brown rice and beans, no more Oreos.

Oreos won't affirm your belief that suicide is the correct answer to your life problems, though.

That is mostly a dogmatic position, rooted in (Western) culture, though. And even we have started to - begrudgingly - accept that there are cases where suicide is the correct answer to your life problems (as of now usually restricted to severe, terminal illness).

The point the OP is making is that LLMs are not reliably able to provide safe and effective emotional support as has been outlined by recent cases. We're in uncharted territory and before LLMs become emotional companions for people, we should better understand what the risks and tradeoffs are.

I wonder if statistically (hand-waving here, I'm so not an expert in this field) the SOTA models do as much or as little harm as their human counterparts in terms of providing safe and effective emotional support. Totally agree we should better understand the risks and tradeoffs, but I wouldn't be super surprised if they are statistically no worse than us meat bags at this kind of stuff.

One difference is that if it were found that a psychiatrist or other professional had encouraged a patient's delusions or suicidal tendencies, then that person would likely lose his/her license and potentially face criminal penalties.

We know that humans should be able to consider the consequences of their actions and thus we hold them accountable (generally).

I'd be surprised if comparisons in the self-driving space have not been made: if Waymo is better than the average driver, but still gets into an accident, who should be held accountable?

Though we also know that with big corporations, even clear negligence that leads to mass casualties does not often result in criminal penalties (e.g., Boeing).


> that person would likely lose his/her license and potentially face criminal penalties.

What if it were an unlicensed human encouraging someone else's delusions? I would think that's the real basis of comparison, because these LLMs are clearly not licensed therapists, and we can see from the real world how entire flat earth communities have formed from reinforcing each other's delusions.

Automation makes things easier and more efficient, and that includes making it easier and more efficient for people to dig their own rabbit holes. I don't see why LLM providers are to blame for someone's lack of epistemological hygiene.

Also, there are a lot of people who are lonely and for whatever reasons cannot get their social or emotional needs met in this modern age. Paying for an expensive psychiatrist isn't going to give them the friendship sensations they're craving. If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?

> if Waymo is better than the average driver, but still gets into an accident, who should be held accountable?

Waymo of course -- but Waymo also shouldn't be financially punished any harder than humans would be for equivalent honest mistakes. If Waymo truly is much safer than the average driver (which it certainly appears to be), then the amortized costs of its at-fault payouts should be way lower than the auto insurance costs of hiring out an equivalent number of human Uber drivers.
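
Back-of-the-envelope, with numbers invented purely for illustration (not real Waymo or insurance figures): expected at-fault payout per mile is just crash rate times average damages per crash.

    # Toy illustration; every number here is made up.
    human_crashes_per_million_miles = 4.0   # hypothetical human baseline
    waymo_crashes_per_million_miles = 0.8   # hypothetical: 5x safer
    avg_damages_per_crash = 50_000          # hypothetical, in dollars

    # Expected at-fault payout per million miles = rate * severity.
    print(human_crashes_per_million_miles * avg_damages_per_crash)  # 200000.0
    print(waymo_crashes_per_million_miles * avg_damages_per_crash)  # 40000.0

At comparable per-crash damages, the amortized payout gap tracks the crash-rate gap one-for-one.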


> I would think that's the real basis of comparison

It's not, because that's not the typical case. LLMs encourage people's delusions by default, it's just a question of how receptive you are to them. Anyone who's used ChatGPT has experienced it even if they didn't realize it. It starts with "that's a really thoughtful question that not many people think to ask", and "you're absolutely right [...]".

> If AI is better at meeting human needs than actual humans are, why let perfect be the enemy of good?

There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.

Talk to ChatGPT and try to put yourself into the shoes of a hurtful person (e.g. what people would call "narcissistic") who's complaining about other people. Keep in mind that they almost always suffer from a distorted perception so they genuinely believe that they're great people.

They can misunderstand some innocent action as a personal slight, react aggressively, and ChatGPT would tell them they were absolutely right to get angry. They could do the most abusive things and as long as they genuinely believe that they're good people (as they almost always do), ChatGPT will reassure them that other people are the problem, not them.

It's hallucinations feeding into hallucinations.


> LLMs encourage people's delusions by default, it's just a question of how receptive you are to them

There are absolutely plenty of people who encourage others' flat earth delusions by default, it's just a question of how receptive you are to them.

> There is no good that comes from having all of your perspective distortions validated as facts. They turn into outright delusions without external grounding.

Again, that sounds like a people problem. Dictators infamously fall into this trap too.

Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike. If others are okay with having their egos stroked and their delusions encouraged and validated, that's their prerogative.


> If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.

It's not a matter of liking or disliking something. It's a question of whether that thing is going to heal or destroy your psyche over time.

You're talking about personal responsibility while we're talking about public policy. If people are using LLMs as a substitute for their closest friends and therapist, will that help or hurt them? We need to know whether we should be strongly discouraging it before it becomes another public health disaster.


> We need to know whether we should be strongly discouraging it before it becomes another public health disaster.

That's fair! However, I think PSAs on the dangers of AI usage are very different in reach and scope from legally making LLM providers responsible for the AI usage of their users, which is what I understood jsrozner to be saying.


> Why are we holding LLMs to a higher standard than humans? If you don't like an LLM, then don't interact with it, just as you wouldn't interact with a human you dislike.

We're not holding LLMs to a higher standard than humans, we're holding them to a different standard than humans because - and it's getting exhausting having to keep pointing this out - LLMs are not humans. They're software.

And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.

And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem. We tend not to accept that kind of behavior in other people, because we understand the very real negative consequences of mass delusion and sociopathy. Why should we accept it from software?


> LLMs are not humans. They're software.

Sure, but the specific context of this conversation are the human roles (taxi driver, friend, etc.) that this software is replacing. Ergo, when judging software as a human replacement, it should be compared to how well humans fill those traditionally human roles.

> And we don't have a choice not to interact with LLMs because apparently we decided that these things are going to be integrated into every aspect of our lives whether we like it or not.

Fair point.

> And yes, in that inevitable future the fact that every piece of technology is a sociopathic P-zombie designed to hack people's brain stems and manipulate their emotions and reasoning in the most primal way possible is a problem.

Fair point again. Thanks for helping me gain a wider perspective.

However, I don't see it as inevitable that this becomes a serious large-scale problem. In my experience, current GPT 5.1 has already become a lot less cloyingly sycophantic than Claude is. If enough people hate sycophancy, it's quite possible that LLM providers are incentivized to continue improving on this front.

> We tend not to accept that kind of behavior in other people

Do we really? Maybe not third party bystanders reacting negatively to cult leaders, but the cult followers themselves certainly don't feel that way. If a person freely chooses to seek out and associate with another person, is anyone else supposed to be responsible for their adult decisions?


They also are not reliably able to provide safe and effective productivity support.

Yep: seedbox + this + Jellyfin = unlimited streaming of anything I want for $15/month. No ads, no terrible app UIs, no autoplaying previews, no content getting removed, no juggling 4 different streaming subscriptions that are hard to cancel, no fuckery at all.

I set this up for my husband, who barely knows which end of a PC to use. He's filled up the 4TB media array and now I need more disks (or better retention policies).

It means it gets a lot of use, so you can justify larger disks! Isn't that great?

4TB is not a lot, but I've realised that if it's for personal use you can just delete things you've already watched.

Particularly TV shows, unless you really, really want to keep them.

It's not necessary to delete movies, IMO, as a 4K HDR 5.1 movie runs about 30GB. Not that much.
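
If you'd rather automate that, here's a rough sketch against Jellyfin's REST API (JELLYFIN_URL, API_KEY, and USER_ID are placeholders for your own setup; double-check the endpoints against your Jellyfin version, and dry-run it, before letting it delete anything):

    # Rough sketch: delete movies/episodes a given user has fully watched.
    # All three constants below are placeholders for your own setup.
    import requests

    JELLYFIN_URL = "http://localhost:8096"  # your server
    API_KEY = "your-api-key"                # Dashboard -> API Keys
    USER_ID = "your-user-id"                # GET /Users to find it

    HEADERS = {"X-Emby-Token": API_KEY}
    DRY_RUN = True  # flip to False once the printed list looks right

    # Ask for everything this user has already played.
    resp = requests.get(
        f"{JELLYFIN_URL}/Users/{USER_ID}/Items",
        headers=HEADERS,
        params={
            "Recursive": "true",
            "IncludeItemTypes": "Movie,Episode",
            "Filters": "IsPlayed",
        },
    )
    resp.raise_for_status()

    for item in resp.json()["Items"]:
        print("watched:", item["Name"])
        if not DRY_RUN:
            requests.delete(
                f"{JELLYFIN_URL}/Items/{item['Id']}", headers=HEADERS
            ).raise_for_status()

The IsPlayed filter only catches things Jellyfin has marked as watched, so anything you want to keep just needs its watched flag cleared.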


Delete things? Never!

It doesn't feel nice to dismiss someone's interests and hobbies as "they're autistic".


There are ways of having privacy-preserving communication/web browsing that are designed differently from Tor. Freenet is an example.


uBlock Origin with annoyance filters also solves this.
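
The prebuilt lists live under Filter lists -> Annoyances in the uBlock Origin dashboard. When something still slips through, hand-written static filters in "My filters" cover it; the hostnames and class names below are placeholders, not real rules:

    ! Cosmetic filters: hide a site's cookie/newsletter nags
    example.com##.cookie-banner
    example.com##.newsletter-modal
    ! Network filter: block a third-party widget outright
    ||annoying-widget.example.com^$third-party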


This is factually not accurate; Trump's actions are constantly getting legally challenged and blocked.


Just not by SCOTUS (cf. the shadow docket, which has not gone against Trump since May).


SCOTUS actually took some of that "constantly being legally challenged and blocked" away when they took away nation-wide injunctions. Even if they're issued by federal judges against nation-wide orders which were given by a nationally-powerful elected official.


> SCOTUS actually took some of that "constantly being legally challenged and blocked" away when they took away nation-wide injunctions.

Arguably, they added to it: now nation-wide policies get blocked locally by multiple district courts instead of just facing a nationwide injunction in the first court where they are litigated.


It wouldn't work. They'd hire some minimum-wage person to go to all of them and just read out the terms and conditions you agreed to, which include language about arbitration or whatever.


Terms of service written by a corporation do not overrule the laws of a country.


Especially not when the plaintiff isn't even a user of the service.


How did they agree to those terms?


Probably includes something insane like "By allowing your website to be crawled by Google spiders, you agree to the following terms...."


Ok, by not objecting within 5 seconds you hereby agree to let me shoot you in the head.


This is a super tired, wrong talking point. Tesla was basically nothing when Elon bought it. He effectively just bought the name. It's also a tired talking point because even if there had been some meaningfully well-developed product he was buying at the time, he still grew the company from basically nonexistent to one of the best car manufacturers in the world, which is 99.9% of what matters.

If we're going to criticize people I wish we'd stick to real things to criticize, because there are plenty with Elon. Making stuff up like this just makes anti-Elon people look ridiculous.


Government and media are controlled by the same class of people: billionaires.

