There's a pretty massive gap between "look, LLMs can be trained to use an API" and "As soon as somebody types the word 'kill every firstborn' nobody can stop them".
It isn't obvious that drones are more effective at killing people without human pilots (and to the extent they are, it's mostly that they're more prone to killing people accidentally, through friendly fire or other errors). And the sort of person with unchecked access to fleets of military drones is going to be pretty powerful when it comes to ordering people to kill other people anyway. A leader who wants to replace soldiers with drones that only his inner circle can control, because he suspects the drones will be more loyal, is exactly the sort of leader who's disproportionately likely to be removed in a military coup, because military equipment is pretty deadly in human hands too...
My point is that the plumbing for this sort of thing is coming into focus. Human factors are not an effective safeguard. I feel like every issue you have brought up is solvable.
Today, already, you can set up a red zone watched by computer vision hooked up to autonomous turrets with orders to fire at anything it recognizes as human.
> Today, already, you can set up a red zone watched by computer vision hooked up to autonomous turrets with orders to fire at anything it recognizes as human.
Sure. You could also achieve broadly similar effects with an autonomous turret running a simple, unintelligent program that fires at anything that moves, or with the 19th century technology of tripwires attached to explosives. The NN wastes less ammo, but it isn't a step change in firepower, least of all for someone trying to monopolise power over an entire country.
Dictators have seldom had trouble getting the military on side, and if they can't, then the military has access to at least as much AI and non-AI tech to outgun them. Nobody doubts computers can be used to kill people (lots of things can be used to kill people); what's being pushed back on here is the idea that computers are some sort of omnipotent genie that grants wishes for everyone's firstborn to die.
I'm not arguing that. But, now that we are hooking up LLMs to the internet, and they are actively hitting various endpoints, something somewhere is eventually going to go haywire and people will be affected somehow. Or it will be deployed against an oppressed class and contribute physically to their misery.
China's monstrous social credit thing might already be that.
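To make the "LLMs hitting endpoints" point concrete, here's a minimal sketch of the kind of agent loop people are wiring up today. The `call_llm` function and the URL are stand-ins (not any particular framework's API): the model's output goes straight into a live HTTP request, and nothing in the loop checks it before it goes out.

```python
# Minimal sketch of an LLM-to-endpoint loop with no human in it.
# `call_llm` is a stand-in for a real model call; the URL is a placeholder.
import json
import requests


def call_llm(prompt: str) -> str:
    # Stand-in: a real agent would send `prompt` to an LLM API here and
    # return whatever the model produced.
    return json.dumps({"method": "GET", "url": "https://example.com", "body": None})


def run_agent(task: str) -> None:
    # Ask the model what request to make, and trust its answer verbatim.
    raw = call_llm(
        f"Task: {task}\n"
        "Respond with a JSON object with keys: method, url, body."
    )
    action = json.loads(raw)

    # This is the "go haywire" surface: no allowlist, no review, no check
    # that the request is safe or aimed at the right host before it fires.
    resp = requests.request(
        method=action["method"],
        url=action["url"],
        json=action.get("body"),
        timeout=10,
    )
    print(resp.status_code, resp.text[:200])


if __name__ == "__main__":
    run_agent("check whether the status page is up")
```

The point isn't that this particular snippet is dangerous; it's that the distance between model output and real-world side effects is now one unchecked function call.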