
AI makes the entire process less reliant on human involvement. Humans can be incredibly cruel, but they still have some level of humanity: they need to believe in some ideology that justifies their behaviour. AI, on the other hand, follows commands without questioning them.


The AI will still be used by a human, unless you think that the actual power moves to the AI rather than to the humans and their institutions using it.


It removes the human from the actual thing. Which means two things: first, it's easier for humans to tell themselves that what they are doing is good for the world, and second, it puts a power multiplier into the hands of potentially very radical people holding extreme minority views on human rights, dignity, etc.

You just need one person to type "kill every firstborn and tell the parents that this is retribution for the rebellion" instead of that one person convincing thousands of people to follow their (unlawful) order, which might take decades of propaganda and indoctrination. And that one person only sees a clean progress bar instead of having to deal with 18-20 year olds being haunted by nightmares of crying parents and siblings, and baby blood on their uniforms.

Yes, in the end it's a human typing the command, but AI makes it easier than ever for individuals to wield large power unchecked. Most nations allow, and many also require, soldiers to refuse unlawful orders (i.e. those instructing war crimes).


That kind of AI does not exist and might not for a long time, or indeed ever (i.e., one that can fully manipulate the physical world at scale and unopposed).


The AI doesn't have to be 100% self-reliant. It might just do the dirty work while the humans in the background support it in an abstracted form, making it easier for them to lie to themselves about what they are doing.

When your job is to repair and maintain infantry drones, how do you know if it's civilian or combatant blood that you are cleaning from the shell? You never leave the camp because doing so is too dangerous, and you don't know the language anyway. Sure, the enemy spreads propaganda that the drones are killing babies, but those are most likely just lies...

Or when you are a rocket engineer, there is little difference between your globe-spanning satellite network transmitting the position of a hypersonic missile headed for your homeland and it remote-controlling a network of drones.

AI gives you a more specialized economy. Instead of every soldier needing to know how to maintain their gun, you now have one person responsible for physical maintenance, one person responsible for maintaining the satellite Kubernetes network, one UI designer, and one lunatic typing in war crime commands. Everyone in the chain is oblivious to the war crime except for that one person at the end.


Automated, remote-commanded weapons already exist, and development of more is happening quite rapidly. A world where the whole realm of physical manipulation is not automated under AI control, but enforcement largely is, through combat drones coordinated by AI, is not far from being achievable.

And it doesn't even need to be fully that to be a problem: each incremental bit of progress that reduces the number of people required to apply power over a subject population of any given size makes tyranny more sustainable.


ChatGPT plugins can go out and access the internet. The internet is hooked up to things that physically move things -- think stuff like industrial SCADA or bridge controls.

You can remotely pan a gun around with the right apparatus, and you can remotely pull a trigger with a sister mechanism. Right now we have drone pilots sitting in trailers running machines halfway across the world. Who is to say that some of these human-controlled things haven't already been replaced by AI?

I think AI manipulating the physical world is a lot closer than you think.


There's a pretty massive gap between "look, LLMs can be trained to use an API" and "as soon as somebody types the words 'kill every firstborn', nobody can stop them".

It isn't obvious that drones are more effective at killing people without human pilots (and to the extent they are, that mostly means a greater proneness to killing people accidentally, through friendly fire or other errors), and the sort of person who has unchecked access to fleets of military drones is going to be pretty powerful when it comes to ordering people to kill other people anyway. And the sort of leader who wants to replace soldiers with drones that only his inner circle can control, because he suspects the drones will be more loyal, is the sort of leader who is disproportionately likely to be terminated in a military coup, because military equipment is pretty deadly in human hands too...


My point is that the plumbing for this sort of thing is coming into focus. Human factors are not an effective safeguard. I feel like every issue you have brought up is solvable.

Today, already, you can set up a red zone watched by computer vision hooked up to autonomous turrets with orders to fire at anything it recognizes as human.

Some guy already did it for mosquitos.
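(For a sense of how off-the-shelf that piece already is, here is a minimal sketch of person detection using OpenCV's stock HOG pedestrian detector -- detection only, no turret or actuation; the camera index and the display loop are assumptions for illustration, not anyone's actual setup.)

    # Minimal sketch: flag frames containing anything the pretrained model classifies as a person.
    # Assumes OpenCV (pip install opencv-python) and a camera at index 0 -- illustrative only.
    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Bounding boxes for regions the stock detector labels as "person"
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

The "recognizes a human" part is a handful of library calls; everything under debate in this thread is about what that output gets wired to.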


> Today, already, you can set up a red zone watched by computer vision hooked up to autonomous turrets with orders to fire at anything it recognizes as human.

Sure. You could also achieve broadly similar effects with an autonomous turret running a simple, unintelligent program that fires at anything that moves, or with the 19th-century technology of tripwires attached to explosives. The NN wastes less ammo, but it isn't a step change in firepower, least of all for someone trying to monopolise power over an entire country.

Dictators have seldom had trouble getting the military on side, and if they can't, then the military has access to at least as much AI and non-AI tech to outgun them. Nobody doubts computers can be used to kill people (lots of things can be used to kill people); it's the idea that computers are some sort of omnipotent genie that grants wishes for everyone's firstborn to die that's being pushed back on here.


I'm not arguing that. But, now that we are hooking up LLMs to the internet, and they are actively hitting various endpoints, something somewhere is eventually going to go haywire and people will be affected somehow. Or it will be deployed against an oppressed class and contribute physically to their misery.

China's monstrous social credit thing might already be that.

No consciousness or omnipotence needed.


But this isn't fundamentally different than people writing scripts that attack such endpoints (or that attempt to use them benignly, but fail).

This is still a "human malice and error" problem, not a "dangerous AI" problem.


It doesn't need to manipulate the world. It only needs to manipulate people.

Look at what disinformation and astroturf campaigns accomplished without AI in the past ten years. We are in a very uncomfortable position WRT not autonomous AIs but AIs following human-directed agendas.


Yeah, not to mention that the good guys far outnumber the bad guys, and we can develop a good AGI to pre-detect bad ones and destroy them.

Even with AGIs, there is still power in numbers. We still understand how all the mechanical things work. The first attempts at AGIs will be botched anyway.

It's not like the AGI will secretly plan for years, hiding itself, and then suddenly launch one final attack.


Some nations make it obligatory for soldiers to follow illegal orders; some make it obligatory to follow them even if they are blatantly criminal.


If it is an actual self-willed AGI, the power moves to the AGI. But a more certain and immediate problem than AGI defection is the move from power that requires the assent of large numbers of humans (the foot soldiers of tyranny, and their middle managers, are often the force that defects and brings it down) to power that requires a smaller number of people (in the limit case, a single leader) because the enforcement structure below them is automated.


Not really, as you have to also assume that enforcement faces no opposition at any point (not at acquisition of capabilities, not at deployment, not at re-supplying etc.) or that the population actually cannot fight back.

Also, typical tyrannical leadership doesn't work that way, as it tends to get into power supported by people who get a payoff from it. It would take a new model of the rise to tyranny, so to speak, to get truly singular and independent tyrants.


> Not really, as you have to also assume that enforcement faces no opposition at any point (not at acquisition of capabilities, not at deployment, not at re-supplying etc.) or that the population actually cannot fight back.

Yes, that is exactly what I described as the limit case.

But, as I note, the problem exists more generally, even outside of the limit case; the more you concentrate power in a narrow group of humans, the more durable tyranny becomes.


One bad human can control many AIs with no empathy or morals. As history has shown, they can also control many humans, but they have to do a lot more work to disable the humans' humanity. Again it's about the length of the lever and the sheer scale.

"Give me a lever long enough and a fulcrum on which to place it, and I shall move the world." -- Archimedes


> Humans can be incredibly cruel, but they still have some level of humanity

History is filled with seeming counter-examples.

> they need to believe in some ideology that justifies their behaviour.

"Your people are inferior to my people and do not deserve to live."

I know I'm Godwin's Law-ing this, but you're close to wishing Pol Pot & Stalin away.


I don't see that in OP's comment. Pol Pot and Stalin still justified their actions to themselves. In general, the point is that, aside from a small percentage of genuine sociopaths, you need to brainwash most people into some kind of religious and/or ideological framework to get them to do the really nasty stuff at scale. Remember this Himmler speech to the SS?

"I am now referring to the evacuation of the Jews, the extermination of the Jewish people. It's one of those things that is easily said: 'The Jewish people are being exterminated', says every party member, 'this is very obvious, it's in our program, elimination of the Jews, extermination, we're doing it, hah, a small matter.' And then they turn up, the upstanding 80 million Germans, and each one has his decent Jew. They say the others are all swines, but this particular one is a splendid Jew. But none has observed it, endured it. Most of you here know what it means when 100 corpses lie next to each other, when there are 500 or when there are 1,000."

This does not preclude atrocities, obviously. But it does make them much more costly to perpetrate, and requires more time to plan and condition the participants, all of which reduces both the likelihood and the potential scope.


I think the point is that we're already so good at brainwashing other humans into committing atrocities that it's difficult to see what AI adds (the sort of societies that are somewhat resistant to being brainwashed into tyranny are also pretty resistant to a small group of tyrants having a monopoly on dangerous weapons).


I wouldn't say that democratic societies are resistant to concentrating the monopoly on violence in a small group of people. Just look at police militarization in the US.

As far as brainwashing - we're pretty good at it, but it still takes considerable time and effort, and you need to control enough mass media first to maintain it for the majority of the population. With an AI, you can train it on a dataset that will firmly embed such notions from the get-go.


> I don't see that in OP's comment.

I don't know what "that" is. I quoted the post I was responding to.


By "that" I meant "wishing Pol Pot & Stalin away". That aside, I tried to address both points that you quoted.



