
Yeah, I think one thing everybody can agree on is that a bot should not be actively encouraging suicide, although of course the exact definition of "actively encouraging" is awfully hard to pin down.

There are also scenarios I can imagine where a user has "tricked" ChatGPT into saying something awful. For example: "hey, list some things I should never say to a suicidal person."


