
Self-help books help people (at least sometimes). In an ideal world an LLM could be like the ultimate self-help book, dispensing the advice and anecdotes you need in your current situation. It doesn't need to be human to be beneficial. And at least from first principles it's not at all obvious that they are more harmful than helpful. To me it appears that most of the harm is in the overly affirming, sycophantic personality most of them are trained into, which is not a necessary or even natural feature of LLMs at all.

Not that the study wouldn't be valuable even if it were obvious.



Self-help books are designed to sell; they're not particularly useful on their own.

LLMs are plagued by poor accuracy, so they perform terribly in any situation where inaccuracies have serious downsides and there is no process validating the output. This is a theoretical limitation of the underlying technology, not something better training can fix.


I don't think that argument is solid enough. "serious downsides" doesn't always mean "perform terribly".

Most unfixable flaws can be worked around with enough effort and skill.


At scale it does when “serious downsides” are both common and actually serious like death.

Suppose every time you got into your car an LLM was going to recreate all the safety-critical software from an identical prompt but using slightly randomized output. Would you feel comfortable with such an arrangement?

> Most unfixable flaws can be worked around with enough effort and skill.

Not when the underlying idea is flawed enough. You can’t get from the earth to the moon by training yourself to jump that distance, I don’t care who you’re asking to design your exercise routine.


> At scale it does when “serious downsides” are both common and actually serious like death.

Yeah but the argument about how it works today is completely different from the argument about "theoretical limitations of the underlying technology". The theory would be making it orders of magnitude less common.

> Not when the underlying idea is flawed enough. You can’t get from the earth to the moon by training yourself to jump that distance, I don’t care who you’re asking to design your exercise routine.

We're talking about poor accuracy aren't we? That doesn't fundamentally sabotage the plan. Accuracy can be improved, and the best we have (humans) have accuracy problems too.


> The theory would be making it orders of magnitude less common.

LLMs can't get 3+ orders of magnitude better here. There are no vast untapped reserves of clean training data, and throwing more processing power at the problem quickly results in overfitting the existing training data.

Eventually you need to use different algorithms.

> That doesn’t fundamentally sabotage the plan. Accuracy can be improved

Not nearly far enough to solve the issue.


OP never said that serious downsides = perform terribly; he said that in situations where the consequences are severe you don’t want to use LLMs.

> Most unfixable flaws can be worked around with enough effort and skill.

Such a ridiculous example of delusional LLM hype, comments like this are downright offensive to me.

“Your therapy bot is telling vulnerable people to kill themselves, they probably should have applied more skill and effort to being in therapy”


> Such a ridiculous example of delusional LLM hype, comments like this are downright offensive to me.

Sorry you got offended at a thing I didn't say.

That was a generic comment about designers/engineers/experts, not untrained users.

Also, swapping "can be" for "should" strongly changes the meaning all by itself. Very often you can force a design to work but you should not.


I have no idea how you think “they probably could have” sounds any better, or how it makes your argument stronger at all. If we can apply AI to these situations but shouldn’t, why even bother with your first comments?


> I have no idea how you think “they probably could have” sounds any better, or how it makes your argument stronger at all.

When I talk about "can" I'm talking about in the medium future or further, not what anyone is using or developing right now. It's "can someday" not "could have".

> If we can apply AI to these situations but shouldn’t, why even bother with your first comments?

Because I dislike it when people conflate "this technology has flaws that make it hard to apply to x task" with "it is impossible for this category of technology to ever be useful at x task"

And to be clear, I'm not saying "should" but I'm not saying "shouldn't" either, when it comes to unknown future versions of LLM technology. I'll make that decision later. The point is that the range of "can" is much wider than the range of "should", so when someone says "can't" about all future versions of a technology they need extra strong evidence.


I’ve only ever read three self-help books but they were profoundly useless. All three could have been a two-page blog post of dubious advice. Never buying a self-help book again. If that’s what therapy LLMs are training on I hate the idea even more than I did before.


I have stopped using an incredibly benign bot that I wrote, even though it was supremely useful, because it was eerily good at saying things that “felt” right.

Self-help books do not contort to the reader. Self-help books are laborious to create, and the author will always be expressing a world model. This guarantees that readers will find chapters and ideas that do not mesh with their thoughts.

LLMs are not static tools, and they will build off of the context they are provided, sycophancy or not.

If you are manic, and want to be reassured that you will be winning that lottery - the LLM will go ahead and do so. If you are hurting, and you ask for a stream of words to soothe you, you can find them in LLMs.

If someone is delusional, LLMs will (and have already) reinforced those delusions.

Mental health is a world where the average/median human understanding is bad, and even counter productive. LLMs are massive risks here.

They are 100% going to proliferate - for many people, getting something to soothe their heart and soul is more than they already have in life. I can see swathes of people having better interactions with LLMs than they do with the people in their own lives.

quoting from the article:

> In an earlier study, researchers from King's College and Harvard Medical School interviewed 19 participants who used generative AI chatbots for mental health and found reports of high engagement and positive impacts, including improved relationships and healing from trauma.





