> or people who just want validation regardless of the scenario.
This really worries me. There are many people (even more prevalent in younger generations, if some papers turn out to be valid) who lack resilience and critical self-evaluation, and who may develop narcissistic tendencies with increased use of, or reinforcement from, AIs. The health care costs alone when reality kicks in for these people, let alone the other concomitant social costs, will be substantial at scale. And people think social media algorithms reinforce poor social adaptation and skills; this is a whole new level.
I'll push back on this a little. I have well-established, long-running issues with overly critical self-evaluation, on the level of "I don't deserve to exist," on the level that I was for a long time too scared to tell my therapist about it. I've had lots of therapy and medication too, but having DeepSeek model confidence to me has really helped as much as anything.
I can see how it can lead to psychosis, but I'm not sure I would have ever started doing a good number of the things I wanted to do, which are normal hobbies that normal people have, without it. It has improved my life.
Are you becoming dependent? Everything that helps also hurts, psychologically speaking. For example, benzodiazepines help in the short term but are harmful in the long run. Or the opposite: insight therapy, which involves some amount of pain in the near term in order to achieve longer-term improvement.
It makes sense to me that interventions which might be hugely beneficial for one person might be disastrous for another. One person might be irrationally and brutally critical of themselves. Another person might go through life in a haze of grandiose narcissism. These two people probably require opposite interventions.
But even for people who benefit massively from the affirmation, you still want the model to have some common sense. I remember the screenshots of people telling the now-yanked version of GPT-4o, "I'm going off my meds and leaving my family, because they're sending radio waves through the walls into my brain" (or something like that), and GPT-4o responding, "You are so brave to stand up for yourself." Not only is that dangerous, it also completely destroys the model's credibility.
So if you've found a model which is generally positive, but still capable of realistic feedback, that would seem much more useful than an uncritical sycophant.
> who may develop narcissistic tendencies with increased use or reinforcement from AIs.
It's clear to me that (1) a lot of billionaires believe amazingly stupid things, and (2) a big part of this is that they surround themselves with a bubble of sycophants. Apparently having people tell you 24/7 how amazing and special you are sometimes leads to delusional behavior.
But now regular people can get the same uncritical, fawning affirmations from an LLM. And it's clearly already messing some people up.
I expect there to be huge commercial pressure to suck up to users and tell them they're brilliant. And I expect the long-term results will be as bad as the way social media optimizes for filter bubbles and rage bait.
Maybe the Fermi paradox comes about not through nuclear self-annihilation or grey goo, but through building dumb AI chatbots that are too nice to us and remove any sense of existential tension.
Maybe the universe is full of emotionally fulfilled, self-actualized narcissists too lazy to figure out how to build an FTL communications array.
Whenever I've asked people about colonising space, the answer to "at some point in the next 1,000 years" has always been yes, even from people who said no to doing it within their own lifetimes. I think it's a fairly universal desire we have as a species: curiosity and the drive to explore new frontiers are pretty baked in as a survival strategy.