And you can be arrested for overstaying a visa. It used to be that that happened after a process that did not include being snatched off the streets by masked thugs. Now we're doing it this way.
> Her husband, she said, had no prior history of mania, delusion, or psychosis. He'd turned to ChatGPT about 12 weeks ago for assistance with a permaculture and construction project
Suspicious of “no prior history.”
All the people I have ever known who were into things like “permaculture” were touched by a bit of insanity of the hippie variety.
Just disasters waiting to happen, whether they found religion, conspiracy theories, or now LLMs.
I understand what you are saying, but this seems like a very low bar for prior history. Many people are into stuff like that without ever going crazy, which contradicts your assertion that they are all disasters waiting to happen. It's also possible for people to believe in some fringe things and still remain functional.
Of course most people who are a little bit mad are still functional; the point is that GPTs are like having a similarly mad friend who encourages you further away from reality, when what you need is someone who can get you grounded.
The problem here is we have no baseline statistics.
I'd say my family is a great example of undiagnosed illnesses. They are disasters already in progress, just waiting for any kind of trigger.
These undiagnosed people self-medicate with drugs and end up in ERs, to the surprise of those around them, at a disturbing rate. That's why we need to know the base rate of mental health episodes like this before we call AI-caused incidents an epidemic.
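To make the base-rate point concrete, here's a back-of-envelope sketch. The user count and incidence figure below are placeholder assumptions for illustration, not numbers from the article; the only point is that a very large user base multiplied by a small background rate of psychosis produces a lot of coincidental cases.

```python
# Back-of-envelope base-rate check with placeholder numbers (illustrative only).
# Assume ~500 million regular chatbot users and an annual incidence of
# first-episode psychosis on the order of ~30 per 100,000 people per year.
users = 500_000_000          # assumed user count
incidence_per_100k = 30      # assumed annual incidence per 100,000

expected_new_cases = users * incidence_per_100k / 100_000
print(f"Expected new cases per year among users, with zero causal effect: {expected_new_cases:,.0f}")
# -> 150,000
```

Under those assumptions you'd expect on the order of a hundred thousand first psychotic episodes per year among users even if the chatbot had no effect at all, and any of those that happened to involve heavy chatbot use would look like "ChatGPT psychosis" to the people around them. That's why the base rate matters before calling it an epidemic.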
Yes. This story links to an earlier story from the same publication [1] that states:
> As we reported this story, more and more similar accounts kept pouring in from the concerned friends and family of people suffering terrifying breakdowns after developing fixations on AI. Many said the trouble had started when their loved ones engaged a chatbot in discussions about mysticism, conspiracy theories or other fringe topics; because systems like ChatGPT are designed to encourage and riff on what users say, they seem to have gotten sucked into dizzying rabbit holes in which the AI acts as an always-on cheerleader and brainstorming partner for increasingly bizarre delusions.
So these people were already interested in mysticism, conspiracy theories and fringe topics. The chatbot acts as a kind of “accelerant” for their delusions.
Or, they're just protecting their property value and neighborhood composition, like anyone else would do.