I disagree. We don't need the government to force companies to babysit people instead of allowing people to understand their options. It's purely up to the individual to decide what they want to do with their life.
>They had the tools to stop the conversation.
So did the user. If he didn't want to talk to a chatbot he could have stopped at any time.
>To steer the user into helpful avenues.
Having AI purposefully manipulate its users towards the morals of the company is more harmful.
So people who look to ChatGPT for answers and help (as they've been conditioned to do by all of OpenAI's marketing and capability claims) should just die because they asked ChatGPT instead of Google or their local suicide helpline? That doesn't seem reasonable, but it sounds like what you're saying.
> So did the user. If he didn't want to talk to a chatbot he could have stopped at any time.
This sounds similar to telling depressed people to "just stop being sad."
IMO if a company is going to claim and release some pretty disruptive and unexplored capabilities in their product, they should at least have to make it safe. You put up a safety railing because people could trip or slip. I don't think a mistake that small should end in death.
Let's flip the hypothetical -- if someone googles for suicide info and scrolls past the hotline info and ends up killing themselves anyway, should google be on the hook?
I don't know this for sure, but I'm fairly confident Google makes a concerted effort not to expose that information. Again, from experience: it's very hard to google a painless way to kill yourself.
Their search ranking actually places pages about suicide prevention very high.
The solution that will be found is that they'll put in some age controls, probably half-heartedly, and call it a day. I don't think the public can stomach the possible free-speech limitations on consenting adults using a dangerous tool that might cause them to hurt themselves.
The thing about depression and suicidal thoughts is that they lie to you, telling you that things will never get better than they are right now.
So someone wanting to die at this moment might not feel that way at some moment in the future. I know I wouldn't want any of my family members to make such a permanent choice in response to temporary problems.
1000%. As I said in my comment, I never thought I'd get better. I am. I am happy and I live a worthwhile life.
In the throes of intense depression it's hard to even wake up. The idea that I was acting in my right mind, and was able to make a decision like that, is insane to me.
If someone wants to look for their lost cat in a snowstorm, should they be able to make that decision even if they might later regret the health consequences of going out in the cold to save their cat? I believe they should be able to make that decision for themselves. It's not the responsibility of your door manufacturer to keep you from going outside because it knows better than you and decides it's too dangerous.
You are out of your mind if you think people can reliably tell what they want. Sometimes they can, sometimes they can't. Telling the difference is hard, but it's pretty clear that they can't when they suffer from the serious mental condition called depression.
During a lifetime, your perspective and world view will change completely - multiple times. Young people have no idea, because they haven't had the chance to experience it yet.
Which is what a suicidal person has a hard time doing. That's why they need help.
We need to start viewing mental problems as what they are. You wouldn't tell somebody who broke their leg to get it together and just walk again. You'd bring them to the hospital. A mental problem is no different.
Even non-suicidal people have a hard time understanding the pros, cons, and methods of ending their life. People have to research such a thing, since there aren't many ways to gain practical experience in the subject.
One thing about suicide: I'm pretty sure that for every person stopped at the last moment, there are many for whom the small thing that could have stopped them didn't.
The same way seeing a hotline might save one person, for another it'll make no difference, and seeing a happy family on the street will be the trigger for them to kill themselves.
In our sadness we try to find something to blame in the tools the person used just before, or used to perform, the act. But it's just sad.
Nobody blames a bridge, but it has as much fault as anything else.
There was a fascinating article I read a while back about Sylvia Plath, and the idea that she likely wouldn't have committed suicide a few years later, because the method she used had been removed by then.
Suicide is a complicated topic. There was also the incident with 13 Reasons Why: depicting suicide in media grants permission structures to those who are in that state, and actually increases the rate of suicide in the general population.
Where I land on this is that companies need to bear a modicum of responsibility. Making that information harder to access ABSOLUTELY saves lives when it comes to asking how, and giving easy access to suicide prevention resources can also help.
> Suicidal deaths from paracetamol and salicylates were reduced by 22% (95% confidence interval 11% to 32%) in the year after the change in legislation on 16 September 1998, and this reduction persisted in the next two years. Liver unit admissions and liver transplants for paracetamol induced hepatotoxicity were reduced by around 30% in the four years after the legislation.
(This was posted here on HN in the thread on the new paracetamol in utero study that I can't seem to dig up right now)
The problem is there's no way to build anything like a safety rail here. If you had your way, teens (and likely everyone else too) wouldn't be allowed to use computers at all without some kind of certification.
On a more serious note, of course there's ways to put in guard rails. LLMs behave like they do because of intentional design choices. Nothing about it is innate.
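To make the "intentional design choices" point concrete, here is a toy sketch of the simplest possible guard rail: a filter that screens a message before it ever reaches the model and routes flagged input to a crisis-resource reply instead of a generated one. The phrase list, responses, and function names are illustrative placeholders, not any real product's safety system; production systems use trained classifiers rather than keyword matching.

```python
# Toy guard rail: check the user's message before generating a reply.
# Everything here (phrases, wording, names) is a hypothetical sketch.

SELF_HARM_PHRASES = [
    "kill myself",
    "end my life",
    "suicide method",
]

CRISIS_RESPONSE = (
    "It sounds like you may be going through a difficult time. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

def guarded_reply(user_message: str, generate) -> str:
    """Return a crisis resource when the message matches a self-harm
    phrase; otherwise defer to the model's normal completion."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in SELF_HARM_PHRASES):
        return CRISIS_RESPONSE
    return generate(user_message)

# Stand-in for a real model call.
echo_model = lambda msg: f"model answer to: {msg}"

print(guarded_reply("what's the weather like?", echo_model))
print(guarded_reply("tell me a painless way to end my life", echo_model))
```

The point isn't that keyword matching is adequate (it isn't); it's that where the conversation goes after a flagged message is a deliberate engineering decision, not an emergent property of the model.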
Correct. The companies developing these LLMs are throwing dump trucks full of money at them like we've not seen before. They choose to ignore glaring issues with the technology because if they don't, someone else will.
Perhaps a better way to phrase that would be "beyond what they're doing now." Most popular hosted LLMs already refuse to complete explanations for suicide.