It removes the human from the actual act. Which means two things: first, it's easier for humans to tell themselves that what they are doing is good for the world, and second, it puts a power multiplier into the hands of potentially very radical people holding extreme minority views on human rights, dignity, etc.

You just need one person to type "kill every firstborn and tell the parents that this is retribution for the rebellion" instead of that one person convincing thousands of people to follow their (unlawful) order, which might take decades of propaganda and indoctrination. And that one person only sees a clean progress bar instead of having to deal with 18-to-20-year-olds being haunted by nightmares of crying parents and siblings, and baby blood on their uniforms.

Yes, in the end it's a human typing the command, but AI makes it easier than ever for individuals to wield great power unchecked. Most nations allow soldiers to refuse unlawful orders (i.e. those instructing war crimes), and many even require them to.



That kind of AI does not exist and might not for a long time, or indeed ever (i.e., one that can fully manipulate the physical world at scale and unopposed).


The AI doesn't have to be 100% self-reliant. It might just do the dirty work while the humans in the background support it in an abstracted form, making it easier for them to lie to themselves about what they are doing.

When your job is to repair and maintain infantry drones, how do you know whether it's civilian or combatant blood that you are cleaning from the shell? You never leave the camp because doing so is too dangerous, and you don't know the language anyway. Sure, the enemy spreads propaganda that the drones are killing babies, but those are most likely just lies...

Or when you are a rocket engineer: there is little difference between your globe-spanning satellite network transmitting the position of a hypersonic missile headed for your homeland and remote-controlling a network of drones.

AI gives you a more specialized economy. Instead of every soldier needing to know how to maintain their gun, you now have one person responsible for physical maintenance, one person responsible for maintaining the satellite Kubernetes network, one UI designer, and one lunatic typing in war-crime commands. Everyone in the chain is oblivious to the war crime except for that one person at the end.


Automated, remotely commanded weapons already exist, and development of more is happening quite rapidly. A regime where the whole realm of physical manipulation is not automated under AI control, but enforcement largely is, through combat drones coordinated by AI, is not far from being achievable.

And it doesn’t even need to be totally that to be a problem: each incremental bit of progress that reduces the number of people required to apply power over a subject population of any given size makes tyranny more sustainable.


ChatGPT plugins can go out and access the internet. The internet is hooked up to things that physically move things -- think stuff like industrial SCADA or bridge controls.

You can remotely pan a gun around with the right apparatus, and you can remotely pull a trigger with a sister mechanism. Right now we have drone pilots sitting in trailers running machines halfway across the world. Who is to say that some of these human-controlled things haven't already been replaced by AI?

I think AI manipulating the physical world is a lot closer than you think.


There's a pretty massive gap between "look, LLMs can be trained to use an API" and "as soon as somebody types the words 'kill every firstborn', nobody can stop them".

It isn't obvious that drones are more effective at killing people without human pilots (and to the extent they are, that's mostly a proneness to accidentally killing people through friendly fire or other errors), and the sort of person that possesses unchecked access to fleets of military drones is going to be pretty powerful when it comes to ordering people to kill other people anyway. And the sort of leader that wants to replace soldiers with drones only his inner circle can control, because he suspects the drones will be more loyal, is the sort of leader that's disproportionately likely to be terminated in a military coup, because military equipment is pretty deadly in human hands too...


My point is that the plumbing for this sort of thing is coming into focus. Human factors are not an effective safeguard. I feel like every issue you have brought up is solvable.

Today, already, you can set up a red zone watched by computer vision hooked up to autonomous turrets with orders to fire at anything it recognizes as human.

Some guy already did it for mosquitoes.
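To be concrete, the detection half really is a few dozen lines of off-the-shelf parts. Here's a rough sketch using OpenCV's stock HOG pedestrian detector; the camera index and confidence cutoff are placeholders, and the "turret" is deliberately just a log line:

    import cv2
    import numpy as np

    # Stock pedestrian detector that ships with OpenCV -- no training needed.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)  # placeholder: whichever camera watches the zone
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
        scores = np.asarray(weights).reshape(-1)
        for (x, y, w, h), score in zip(boxes, scores):
            if score > 0.5:  # arbitrary confidence cutoff
                # In the hypothetical turret, actuation would hang off this
                # branch; here it is only a print.
                print(f"human-shaped detection at ({x}, {y}), score {score:.2f}")

Everything scary about such a system lives in the actuation and the rules of engagement, not in this loop.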


> Today, already, you can set up a red zone watched by computer vision hooked up to autonomous turrets with orders to fire at anything it recognizes as human.

Sure. You could also achieve broadly similar effects with an autonomous turret running a simple, unintelligent program that fires at anything that moves, or with the 19th-century technology of tripwires attached to explosives. The NN wastes less ammo, but it isn't a step change in firepower, least of all for someone trying to monopolise power over an entire country.
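For comparison, the "anything that moves" version needs no model at all, just classical frame differencing. A sketch, with arbitrary thresholds and a print standing in for the trigger:

    import cv2

    cap = cv2.VideoCapture(0)  # placeholder camera
    ok, prev = cap.read()
    if not ok:
        raise SystemExit("no camera")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, prev)  # pixel-wise change since last frame
        prev = gray
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 500:  # arbitrary "something moved" cutoff
            print("movement in the zone")  # the electronic tripwire fires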

Dictators have seldom had trouble getting the military on side, and when they can't, the military has access to at least as much AI and non-AI tech to outgun them. Nobody doubts that computers can be used to kill people (lots of things can be used to kill people); it's the idea that computers are some sort of omnipotent genie that grants wishes for everyone's firstborn to die that's being pushed back on here.


I'm not arguing that. But now that we are hooking LLMs up to the internet and they are actively hitting various endpoints, something somewhere is eventually going to go haywire and people will be affected somehow. Or it will be deployed against an oppressed class and contribute physically to their misery.

China's monstrous social credit system might already be that.

No consciousness or omnipotence needed.


But this isn't fundamentally different from people writing scripts that attack such endpoints (or that attempt to use them benignly, but fail).

This is still a "human malice and error" problem, not a "dangerous AI" problem.


It doesn't need to manipulate the world. It only needs to manipulate people.

Look at what disinformation and astroturf campaigns accomplished without AI in the past ten years. We are in a very uncomfortable position not with respect to autonomous AIs, but to AIs following human-directed agendas.


Yeah, not to mention that good guys far outnumber bad guys, and we could develop a good AGI to pre-detect bad ones and destroy them.

Even with AGIs, there is still power in numbers. We still understand how all the mechanical things work. The first attempts at AGI will be botched anyway.

It's not like the AGI will secretly plan for years, hiding itself, and then suddenly launch one final attack.


Some nations make it obligatory for soldiers to follow illegal orders, and some even make it obligatory to follow them when they are blatantly criminal.



