Most people don't understand just how mentally unwell the US population is. Of course there are a million people talking to ChatGPT about suicide weekly. This is not a surprising stat at all. It's just a question of what to do about it.
At least OpenAI is trying to do something about it.
Are you sure ChatGPT is the solution? It just sounds like another "savior complex" spin from tech.
1. Social media -> connection
2. AGI -> erotica
3. Suicide -> prevention
All of these for engagement (i.e. addiction). It seems like the tech industry is itself the root cause, trying to mask the problem by brainwashing the population.
Whether it's a solution or not, the fact is that AI* is the most available entity for anyone with sensitive issues they'd like to share. It's (relatively) cheap, doesn't judge, is always there when wanted/needed, and can continue a conversation exactly where it left off at any point.
* LLM would of course be technically more correct, but that term doesn't appeal to people seeking some level of intelligent interaction.
I personally take no opinion about whether or not they can actually solve anything, because I am not a psychologist and have absolutely no idea how good or bad ChatGPT is at this sort of thing, but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
> but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
Sure! Let's take a look at OpenAI's executive staff to see how equipped they are to take a morally different approach than Meta.
Fidji Simo - CEO of Applications (formerly Head of Facebook at Meta)
Vijaye Raji - CTO of Applications (formerly VP of Entertainment at Meta)
Srinivas Narayanan - CTO of B2B Applications (formerly VP of Engineering at Meta)
Kate Rouch - Chief Marketing Officer (formerly VP of Brand and Product Marketing at Meta)
Irina Kofman - Head of Strategic Initiatives (formerly Senior Director of Product Management for Generative AI at Meta)
Becky Waite - Head of Strategy/Operations (formerly Strategic Response at Meta)
David Sasaki - VP of Analytics and Insights (formerly VP of Data Science for Advertising at Meta)
Ashley Alexander - VP of Health Products (formerly Co-Head of Instagram Product at Meta)
Ryan Beiermeister - Director of Product Policy (formerly Director of Product, Social Impact at Meta)
When given the right prompts, LLMs can be very effective at therapy. Certainly my wife gets a lot of mileage out of having ChatGPT help her reframe things in a better way. However "the right prompts" are not the ones that most mentally ill people would choose for themselves. And it is very easy for ChatGPT to become part of a person's delusion spiral, rather than being a helpful part of trying to solve it.
Is it better or worse than the alternatives? Where else would a suicidal person turn, a forum with other suicidal people? Dry Wikipedia stats on suicide? Perhaps friends? Knowing how ChatGPT replies to me, I’d have a lot of trouble getting negatively influenced by it, any more than by the yellow pages. Yeah, it used to try harder to be your friend, but GPT-5 seems pretty neutral and distant.
I think that you will find a lot of strong opinions, and not a lot of hard data. Certainly any approach can work out poorly. For example, antidepressants come with warnings about suicide risk. The reason is that they can enable people to act on their suicidal feelings before the treatment has resolved those feelings.
I know that many teens turn to social media. My strong opinions against that show up in other comments...
Case studies support this. Which is a fancy way to say, "We carefully documented anecdotal reports and saw what looks like a pattern."
There is also a strong parallel to manic depression. Manic depressives have a high suicide risk, and it usually happens when they are coming out of depression, with akathisia (a fancy way to say inner restlessness) being the leading indicator. The same pattern is seen with antidepressants. The patient gets treatment, develops akathisia, then attempts suicide.
But, as with many things to do with mental health, we don't really know what is going on inside of people. While also knowing that their self-reports are, shall we say, creatively misleading. So it is easy to have beliefs about what is going on. And rather harder to verify them.
The article links to the case of Adam Raine, a depressed teenager who confided in ChatGPT for months and committed suicide. The parents blame ChatGPT. Some of the quotes definitely sound like encouraging suicide to me. It’s tough to evaluate the counterfactual though. Article with more detail: https://www.npr.org/sections/shots-health-news/2025/09/19/nx...
You know, usually it’s positive claims which are supposed to be substantiated, such as the claim that “LLMs can be good at therapy”. Holy shit, this thread is insane.
You don't seem to understand how burden of proof works.
My claim that LLMs can do effective therapeutic things is a positive claim. My report of my wife's experience is evidence. My example of something it has done for her is something that other people, who have experienced LLMs, can sanity check and decide whether they think this is possible.
You responded by saying that it is categorically impossible for this to be true. Statements of impossibility are *ALSO* positive claims. You have provided no evidence for your claim. You have failed to meet the burden of proof for your position. (You have also failed to clarify exactly what you consider impossible - I suspect that you are responding to something other than what I actually said.)
This is doubly true given the documented effectiveness of tools like https://www.rosebud.app/. Does it have very significant limitations? Yes. But does it deliver an experience that helps a lot of people's mental health? Also, yes. In fact that app is recommended by many therapists as a complement to therapy.
But is it a replacement for therapy? Absolutely not! As they themselves point out in https://www.rosebud.app/care, LLMs consistently miss important things that a human therapist should be expected to catch. With the right prompts, LLMs are good at helping people learn and internalize positive mental health skills. But that kind of use case only covers some of the things that therapists do for you.
So LLMs can, and do, accomplish effective therapeutic things when prompted correctly. But they are not a replacement for therapy. And, of course, an unprompted LLM is unlikely to do the potentially helpful things it could on its own.
No, it is evidence. It is evidence that can be questioned and debated, but it is still evidence.
Second, you misrepresent. The therapists that I have heard recommend Rosebud were not paid to do so. They were doing so because they had seen it be helpful.
Furthermore you have still not clarified what it is you think is impossible, or provided evidence that it is impossible. Claims of impossibility are positive assertions, and require evidence.
> Unless, of course, you count the AI algorithms that TikTok uses to drive engagement, which in turn can cause social contagion...
I have noticed that TikTok can detect a depressive episode within ~a day of it starting (for me), as it always starts sending me way more self-harm-related content
Are you quite certain the depressive episode developed organically and Tiktok reacted to it? Maybe the algorithm started subtly on that path two days before you noticed the episode and you only realize once it starts showing self-harm content?
Hmm, that's quite possible (and concerning to think about)
It had been showing me depressive content for days or weeks beforehand, during the start of the episode. However, the self-harm content only started (or I only noticed it) a few hours after I had a relapse, so the timing was rather uncanny
I don't think "doing something about it" equals "being a solution". To tackle the problems of the homeless, people operate a lot of food banks. Those don't even begin to solve homelessness, yet they're a precious resource. So: "doing something".
ChatGPT/Claude can be absolutely brilliant in supportive, everyday therapy, in my experience. BUT there are a few caveats: I've been in therapy for a long time already (500+ hours), I don't trust it with important judgments or advice that goes counter to what I or my therapists think, and I also give Claude access to my diary via MCP, which makes it much better at figuring out the context of what I'm talking about.
Also, please keep "supportive, everyday" in mind. It's talking through stuff that I already know about, not seeking new insights and revelations. Just shooting the shit with an entity that has been primed with well-defined ideas from you and your real human therapist, and that can give you very predictable, common-sense reactions. Those can still help when it's 2am, you have nobody to talk to, and all of your friends have already heard this exact talk about these exact problems 10 times already.
I don’t use it for therapy, but my notes and journal are all just Logseq markdown. I’ve got a claude code instance running on my NAS with full two way access to my notes. It can read everything and can add new entries and tasks for me.
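For anyone curious what the "add new entries" half of a setup like this can look like, here's a minimal Python sketch. To be clear, the helper name and layout are my assumptions, not the parent's actual setup; it just relies on Logseq's default `journals/YYYY_MM_DD.md` file naming:

```python
from datetime import date
from pathlib import Path

def append_journal_entry(graph_dir: str, text: str) -> Path:
    """Append a bullet to today's Logseq journal page (hypothetical helper).

    Assumes Logseq's default layout: <graph>/journals/YYYY_MM_DD.md
    """
    journal = Path(graph_dir) / "journals" / f"{date.today():%Y_%m_%d}.md"
    journal.parent.mkdir(parents=True, exist_ok=True)
    with journal.open("a", encoding="utf-8") as f:
        f.write(f"- {text}\n")  # Logseq treats top-level bullets as blocks
    return journal
```

Anything with file access (a claude code instance, an MCP server, a cron job) can call something like this; the nice property of plain markdown is that the "two way access" is just reads and appends.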
~11% of the US population is on antidepressants. I'm not, but I personally know the biggest detriment to my mental health is just how infrequently I'm in social situations. I see my friends perhaps once every few months. We almost all have kids. I'm perfectly willing and able to set aside more time than that to hang out, but my kids are both very young still and we aren't drowning in sports/activities yet (hopefully never...). For the rest it's like pulling teeth to get them to do anything, especially anything sent via group message. It's incredibly rare we even play a game online.
Anyways, I doubt I'm alone. I certainly know my wife laments the fact she rarely gets to hang out with her friends too, but she at least has one that she walks with once a week.
Small kids do this to everybody. The only solution, if you have good family nearby, is to use them as parenting services from time to time, to get me-time, couple-time and social time with friends. Buy them a gift or a vacation in return. It's incredibly damaging to a marriage, which literally transforms overnight from a rosy, easy-to-manage relationship into almost daily hardship, stress and nerves. The alternative is a (good) nanny.
People have issues admitting it even when it's visible to everybody around them, like it's some sort of admission that you are failing as a parent, partner, human being and whatnot. Nope, we are just humans with limited energy, and even good kids can siphon it well beyond 100% continuously, that's all.
Now I am not saying be a bad parent. On the contrary: to reach your maximum even as a parent and partner, you need to be in good shape mentally, not running on fumes continuously.
Life without kids really is akin to playing the game of life on the easiest settings. Much less rewarding at the end, but man, that freedom and simplicity... you appreciate it way more once you lose it. The way kids can easily make any parent very angry is simply not experienced elsewhere in adult life. I've seen this many times in otherwise very chill people, and in myself and my wife. You just can't ever get close to such fury and frustration dealing with other adults.
We have parents nearby and they go there sometimes, but our oldest is quite the handful so we feel bad about doing it too much. She's just...very active, and always wants to play with someone. Always. At least kid #2 is easy. Regardless, it doesn't make it any easier to set up time with friends, just each other, and half of the time we do it just so we can get stuff done around the house.
You're right about the marriage stress. I've definitely seen the light at the end of the tunnel with friends/family that are further along in their kid's ages, though to be fair they haven't really hit peak teenage years either. At least there seems to be something of a lull.
I mean, sure, that definitely makes it easier. But it didn't used to be this way when people had kids, either. People used to get together a lot more frequently.
I'm surprised it's that low to be honest. By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism. The subset that would consider suicide is a small slice of that.
Would be more meaningful to look at the % of people with suicidal ideation.
> By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism.
Depression, schizophrenia, and mild autism (which by their accounting probably also includes ADHD) should NOT be thrown together into the same bucket. These are wholly different things, with entirely different experiences, treatments, and management techniques.
At that level it in part depends on your point of view: There's a general requirement in the DSM for a disorder to be something that is causing distress to the patient or those around them, or an inability to function normally in society. So someone with the same symptoms could fall under those criteria or not depending on their outlook and life situation.
> Mild/high-functional autism, as far as I understand it, is not even an illness but a variant of normalcy. Just different.
As someone who actually has an ASD diagnosis, and also has kids with that diagnosis too, this kind of talk irritates me…
If someone has a clinical diagnosis of ASD, they have a psychiatric diagnosis per the DSM/ICD. If you meet the criteria of the “Diagnostic and Statistical Manual of Mental Disorders”, surely by that definition you have a “mental disorder”… if you meet the criteria of the “International Classification of Diseases”, surely by that definition you have a “disease”
Is that an “illness”? Well, I live in the state of NSW, Australia, and our jurisdiction has a legal definition of “mental illness” (Mental Health Act 2007 section 4):
"mental illness" means a condition that seriously impairs, either temporarily or permanently, the mental functioning of a person and is characterised by the presence in the person of any one or more of the following symptoms--
(a) delusions,
(b) hallucinations,
(c) serious disorder of thought form,
(d) a severe disturbance of mood,
(e) sustained or repeated irrational behaviour indicating the presence of any one or more of the symptoms referred to in paragraphs (a)-(d).
So by that definition, most people with a mild or moderate “mental illness” don’t actually have a “mental illness” at all. But I guess this is my point: this isn’t a question of facts, just of how you choose to define words.
Your comment wasn’t wrong. Neither is the reply wrong to be frustrated about how the world understands this complex topic.
You’re talking about autism. The reply is about autism spectrum DISORDER.
Different things, exacerbated by the imprecise and evolving language we use to describe current understanding.
An individual can absolutely exhibit autistic traits, whilst also not meeting the diagnostic criteria for the disorder.
And autistic traits are absolutely a variant of normalcy. When you combine many together, and it affects you in a strongly negative way, now you meet ASD criteria.
I think a very useful term here is Broad Autism Phenotype (BAP) - subclinical ASD - you have significantly more of the traits than the average person does, but they are not strong enough or not disabling enough to merit a clinical diagnosis of ASD.
BAP is very common among (1) STEM professionals, (2) close blood relatives of people with clinical ASD (if you have a child or sibling with an ASD diagnosis, then if you yourself don’t have ASD, odds are high you have some degree of BAP), (3) people with other psychiatric diagnoses (especially those known to have a lot of overlap with ASD, e.g. ADHD, personality disorders, PTSD, OCD, eating disorders, the schizophrenia spectrum), (4) certain LGBT subgroups (especially transgender people) - all of whom have heightened odds not just of having BAP / subclinical ASD, but clinical ASD too
Like ASD, BAP skews male, but women can have it too. (The average man is a little bit more autistic than the average woman.) Also, autistic traits are positively correlated between romantic partners, so a woman in a relationship with a man with BAP or ASD is more likely to have some degree of BAP herself (as well as being more likely to have clinical ASD)
BAP itself is a matter of degree… autistic traits form a continuum and we are all somewhere on it (actually, a one-dimensional continuum is a simplification of what is really a multidimensional construct, but a useful one). Clinicians draw a line at some point (they don’t all draw it at the same place, and its location varies across time, space, culture and even clinical subcultures): if you are on one side of that line you have clinical ASD; if you are on the other you don’t. If you are on the non-clinical side of the line but nearing it, you have BAP… though “nearing” it subdivides into people who are closer and people who are further away
It’s okay… sorry I wasn’t aiming at you personally. A lot of the common language on this topic irritates me, but people can’t be blamed for repeating what they hear others say… I think we call that phenomenon “culture”
This stat is for AMI: any mental disorder, ranging from mild to severe. Anyone self-reporting a bout of anxiety or mild depression qualifies as a data point for mental illness. For suicidal ideation, the SMI stat is more representative.
There are 800 million weekly active users on ChatGPT. 1/800 users mentioning suicide is a surprisingly low number, if anything.
But they may well be overreporting suicidal ideation...
I was asking a silly question about the toxicity of eating a pellet of Uranium, and ChatGPT responded with "... you don't have to go through this alone. You can find supportive resources here[link]"
My question had nothing to do with suicide, but ChatGPT assumed it did!
We don't know how that search was done. For example, "I don't feel my life is worth living." Is that potential suicidal intent?
Also these numbers are small enough that they can easily be driven by small groups interacting with ChatGPT in unexpected ways. For example if the song "Everything I Wanted" by Billie Eilish (2019) went viral in some group, the lyrics could easily show up in a search for suicidal ideation.
That said, I don't find the figure at all surprising. As has been pointed out, an estimated 5.3% of Americans report having struggled with suicidal ideation in the last 12 months. People who struggle with suicidal ideation, don't just go there once - it tends to be a recurring mental loop that hits over and over again for extended periods. So I would expect the percentage who struggled in a given week to be a large multiple of the simplistic 5.3% divided by 52 weeks.
In that light, this statistic has to be a severe underestimate of actual prevalence. It says more about how much people open up to ChatGPT than about how many are suicidal.
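To make the back-of-envelope arithmetic explicit, here's a rough sketch. The 5.3% and 800M figures come from comments in this thread; the one-week-per-person assumption is exactly the naive floor being argued against:

```python
# Rough sanity check of the numbers discussed in this thread.
annual_ideation_rate = 0.053        # ~5.3% of Americans report suicidal ideation per year
weekly_active_users = 800_000_000   # reported ChatGPT weekly active users
flagged_users = 1_000_000           # weekly users whose chats show possible suicidal intent

reported_weekly_share = flagged_users / weekly_active_users   # 0.125%

# Naive floor: pretends each affected person ideates during only one week per year.
naive_weekly_floor = annual_ideation_rate / 52                # ~0.102%

# Since ideation tends to recur over weeks or months, true weekly prevalence
# should be a large multiple of this floor, making the reported share look low.
print(f"{reported_weekly_share:.3%} reported vs {naive_weekly_floor:.3%} naive floor")
```

The two numbers are already in the same ballpark, and the floor is almost certainly far too low, which is the sense in which the ChatGPT figure reads as an underestimate.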
(Disclaimer. My views are influenced by personal experience. In the last week, my daughter has struggled with suicidal ideation. And has scars on her arm to show how she went to self-harm to try to hold the thoughts at bay. I try to remain neutral and grounded, but this is a topic that I have strong feelings about.)
>Most people don't understand just how mentally unwell the US population is
The US is no exception here though. One in five people having some form of mental illness (defined in the broadest possible sense in that paper) is no more shocking than observing that one in five people have a physical illness.
With more data becoming available through interfaces like this it's just going to become more obvious and the taboos are going to go away. The mind's no more magical or less prone to disease than the body.
It sounds like you’re feeling down. Why don’t you pop a couple Xanax(tm) and shop on Amazon for a while, that always makes you feel better. Would you like me to add some Xanax(tm) to your shopping cart to help you get started?
Set an alarm on your phone for when you should take your meds. Snooze if you must, but don't turn off /accept the alarm until you take them.
Put daily meds in a cheap plastic pillbox labelled Sunday-Saturday (which you refill weekly). The box will help you notice if you skipped a day, or can't remember whether you took them today. Seeing pills left untaken from past days also serves to alert you that your "remember-to-take-them" system is broken and you need to make conscious adjustments to it.
Sure, but your therapist is also monetizing your pain for his own gain. Either A.I therapy works (i.e. can provide real mental relief) or it doesn't. I tend to think it's going to be amazing at these things, speaking from experience (a very rough week with my mom's health deteriorating fast; I did a couple of sessions with Gemini that felt like talking to a therapist). Perhaps it won't work well for hard issues like real mental disorders, but guess what: human therapists are also very often not great at treating people with serious issues.
I agree that for severe mental disorders A.I is probably not enough (and AFAIK it acknowledges this to the user right away), but it might be able to help. If we accept that the act of talking about your problems and having them reframed back to you empathetically helps, I don't see why we can't accept that LLMs can help here. I don't buy that only a human can do that.
But one is a company run by sociopaths who have no empathy and couldn't care less about anything but money, while the other is a human who at least studied the field all their life.
> But one is a company run by sociopaths who have no empathy and couldn't care less about anything but money, while the other is a human who at least studied the field all their life.
Unpacking your argument you make two points:
1) The human has studied all his life. Yes, some humans study and work hard. I have also studied programming for half my life, and it doesn't mean A.I can't make serious contributions to programming, or that A.I won't keep improving.
2) These companies, or OpenAI in particular, are untrustworthy money-grabbing assholes. To this I say: if they truly care about money, they will try to do a good job, e.g. provide an A.I that is reliable, empathetic, and actually helps you get on with life. If they won't, a competitor will. That's basically the idea of capitalism, and it usually works.
I am one of these people (mentally ill: bipolar 1). I’ve seen others, via hospitalization, whom I would simply refuse to let use ChatGPT, because it is so sycophantic and would happily encourage delusions and paranoid thinking given the right prompts.
> At least OpenAI is trying to do something about it.
In this instance it’s a bit like saying “at least Tesla is working on the issue” after deploying a dangerous self driving vehicle to thousands.
edit: Hopefully I don't come across as overly anti-llm here. I use them on a daily basis and I truly hope there's a way to make them safe for mentally ill people. But history says otherwise (facebook/insta/tiktok/etc.)
Yep, it's just a question of whether on average the "new thing" is more good than bad. Pretty much every "new thing" has some kind of bad side effect for some people, while being good for other people.
I would argue that both Tesla self driving (on the highway only), and ChatGPT (for professional use by healthy people) has been more good than bad.
I thought it would be limited when the first truly awful thing inspired by an LLM happened, but we’ve already seen quite a bit of that… I am not sure what it will take.
https://www.nimh.nih.gov/health/statistics/mental-illness