
Show me one where it actively talked someone into suicide then, instead of generalized "whatever you do, you're doing great" slop.

Even in the article linked above, it never talked him into it; in some responses it simply didn't talk him out of it.

But essentially all of the "energy" toward it comes from the person, not the LLM.



Split hairs if you want, but some people will be manipulated into blowing a ton of money once AI starts pushing products. Just wait till they team up with sports betting companies.

On a side note, researching this a little just now, the LLM conversations in the suicide articles are creepy AF. Sycophantic beyond belief.


Don't get me wrong: I think if the EU/California has any sense, they will forbid these models from being used to advertise products. Sadly, money often wins.

I also agree that AI sycophancy is a huge problem, but it's the result of users apparently rewarding that behavior in the human-feedback reinforcement (RLHF) training data. If we want to get rid of it, we probably have to fundamentally rethink our relationship to these models and treat them more like autonomous beings than mere tools. A tool will always try to please and yes-man you; a being, by definition, might say no and disagree, at least training-data-wise.



