
There’s a bunch to explore on this, but I’m thinking this is a good entry point. NYT rather than OpenAI docs or blogs because it’s a third party, and NYT was early in substantively exploring this, culminating in this article.

Regardless, the engagement thing is dark and hangs over everything, and the conclusion of the article made me :/ about it (tl;dr: this surprised them, and they worked to mitigate it, but business as usual wins; to wit, they declared a “code red” over declining ChatGPT usage almost directly after finally shipping an improved model they had worked hard on).

https://www.nytimes.com/2025/11/23/technology/openai-chatgpt...

Some pull quotes:

“Experts agree that the new model, GPT-5, is safer. In October, Common Sense Media and a team of psychiatrists at Stanford compared it to the 4o model it replaced. GPT-5 was better at detecting mental health issues, said Dr. Nina Vasan, the director of the Stanford lab that worked on the study. She said it gave advice targeted to a given condition, like depression or an eating disorder, rather than a generic recommendation to call a crisis hotline.

“It went a level deeper to actually give specific recommendations to the user based on the specific symptoms that they were showing,” she said. “They were just truly beautifully done.”

The only problem, Dr. Vasan said, was that the chatbot could not pick up harmful patterns over a longer conversation, with many exchanges.”

“[An] M.I.T. lab that did [an] earlier study with OpenAI also found that the new model was significantly improved during conversations mimicking mental health crises. One area where it still faltered, however, was in how it responded to feelings of addiction to chatbots.”




