We were on another thread about a study saying that using LLMs as psychiatrists is a bad idea.
Then another hnuser chimed in saying he ran a support forum for people. Said these aren't the real problem. The real problem is the "AI girlfriends". They go off the rails completely and tell people to do unhinged things. Apparently his forum has already lost a few members to who knows what because of these things.
I'm currently working on an art installation where visitors can replay dialogs I had with AI chatbots I created on character.ai. I'm showing how quickly these bots can go off the rails, sometimes leading you into very dark territory. The fact that these apps are used by millions of people on a regular basis, supplementing or even replacing real human interactions, is frightening.
AI therapy bots can easily slide into the AI girlfriend category. There's a YouTuber, Caelan Conrad, who tried out AI therapy bots with scenarios like suicidal ideation, excessive attachment, etc. The bots went off the rails, feeding into conspiracy theories and advising homicide.
The page has been updated; there was a whole bit about using LLMs to generate ERP prompts and feedback for self-treatment. It was a complete clown car.
Which is why AI therapy scares me so much: my insurance doesn't care about the quality of results at all and would love to replace all therapy with AI to save a buck.
That feeling when you've filtered out Hacker News LLM articles via uBlock Origin and still get clickbaited into reading a content-free article suggesting LLMs for therapy. Shit sucks.
The day just started and I'm already done with it.