show your work. Even the existing version of AI clearly enables very high-scale misinformation campaigns, the analog/basic/manual version of which (e.g. Fox, Newsmax, 4chan/QAnon, Facebook, etc.) has already put American democracy in a super dire situation. And that's before its impact on income inequality, jobs, and so on.
No, it's a cheaper guided missile, to stretch the analogy. The invention of accurate guided fire systems was revolutionary in warfare. It means you have to figure out a way to keep things working even though any large and valuable grouping of logistics or materiel or men can be destroyed with nearly no recourse. The only saving grace was that guided munitions were very expensive, so you couldn't really (other than America in the Gulf, but that's not a peer adversary) task each and every enemy foot soldier with a guided munition.
Well, now we have $500 drones with RPGs strapped to them, and you CAN task a guided munition to each and every soldier and piece of equipment. There is no safe space in the field. You have to dig holes in the ground and hide from the sky. The lethality of war has gone way up.
These "AI" models are the same way. Before, it was expensive and effort consuming to run a scam or phishing or anything that requires social engineering. That's why the target the soft and valuable targets, like grandma, or people on MLM mailing lists, or large and juicy companies. But now, the cost of a phishing or scam campaign can be heavily reduced. Think of how often companies fail internal phishing tests, and those emails are usually pretty basic, low effort, trivially identifiable as phishing. It's a massive upgrade to troll farms. They don't have to limit their targeting as much, because cracking harder targets will be cheaper to attempt. It doesn't matter if it isn't actually more efficient or more effective, just the fact that targeting someone and initiating a campaign against them is cheaper is all that was needed to make this situation brutal.