My hypothesis is that a lot of high-status intellectuals are reflexively dismissing AI out of fear that it threatens their status: AI threatens to turn intellectual labor into a cheap commodity rather than the premium bespoke product they sell via their professorships, speaking fees, and book sales.
Most criticism of AI has all the hallmarks of a coping mechanism. Critics shift quickly between declaring it an ineffective parlor trick and declaring it so good and effective that it poses a risk to the human spirit. There are legitimate criticisms out there, especially of OpenAI's potential regulatory capture and misuse of models, but calling it all snake oil is hilariously naïve and looks more like a fear of change.
Really? You think people who write for a living might be biased towards confirming that the thing already replacing many people who write for a living is a fad that's going to blow over soon?
My favorite part of all the doom predictions is the sheer unawareness of the rate of change of the rate of change.
If you had asked me 18 months ago how long after GPT-3 until something existed with the capabilities of GPT-4, I'd probably have guessed about 5 years.
If you asked me 5 years ago how long until an AI could explain why a joke was funny, I'd have maybe guessed at least a decade or two, if it was even possible.
If you asked me 10 years ago if I'd see AI replacing artists or copywriters in my lifetime, I'd have guessed maybe when I was in a retirement home (I'm still a fair ways away from that).
No one thought what exists today was even possible within our lifetimes a decade back.
I'm reminded of the Louis C.K. routine about how everything is amazing and nobody is happy.
Once-unthinkable AI has already become so normalized that people are predicting its doom based on shortcomings less than two years old, because AI doing fucking IMPOSSIBLE things (or so everyone thought) is only that old.
The rate is outrageous. I do think there's currently a significant setback, with obsolete alignment approaches being carried forward to models that probably need new techniques, but that will be a temporary step back running in parallel with significant strides in the underlying technology, from hardware to model design to improved knowledge of how to squeeze the most water from the rock.
I just hope that when all these folks turn out to have been dead wrong, we don't collectively forget. Futurists should live and die by their records, but too many of us have goldfish memories and keep listening to false futurists well after they've shown their snake-oil hand.