I ask ChatGPT all kinds of questions that could be considered sensitive. For example, I frequently ask about my dog’s medications. When my dog had a reaction to one of them, I asked ChatGPT about the symptoms, which ultimately prompted me to take her to the emergency vet.
A couple of weeks ago, I also asked about the symptoms of sodium overdose. I had eaten ramen and then pho within about twelve hours and developed a headache. After answering my question, ChatGPT cleared the screen and displayed a popup urging me to seek help if I was considering harming myself.
What has been genuinely transformative for me is getting actual answers—not just boilerplate responses like “consult your vet” or “consider talking to a medical professional.”
This case is different, though. ChatGPT reinforced someone’s delusions. My concern is that OpenAI may overreact by broadly restricting the model’s ability to give its best, most informative responses.