Hacker News | flr03's comments

I'm not scared about AI recommending nuclear strikes; I'm scared about the human behind the keyboard delegating reasoning and responsibility to something they think is always correct, something that can hide bias and flaws better than anything.


Some of the most reassuring and scariest things you can read are about the incidents that have already occurred where computers said "launch all the nukes" and the humans refused. On the one hand, good news! We have prior art that says humans don't just launch all the nukes just because the computers or procedures say to. Bad news, it's been skin-of-our-teeth multiple times already.

https://www.warhistoryonline.com/cold-war/refused-to-launch-... - This isn't even the incident I was searching for to reference! This one was news to me.

https://en.wikipedia.org/wiki/Stanislav_Petrov#Incident - This is the one I was looking for.


> We have prior art that says humans don't just launch all the nukes just because the computers or procedures say to.

previously no-one had spent trillions of dollars trying to convince the world that those computers were "Artificial Intelligence"


Of course they did. That's the literal topic of WarGames (1983). You should actually be somewhat reassured that we aren't living in the era of Dr. Strangelove, when characters in the military-industrial complex held significantly more insane beliefs about what computer systems and nukes could do.

There was a time when people wanted to dig tunnels with nukes https://en.wikipedia.org/wiki/Project_Plowshare


Digging tunnels with nukes sounds better to me than shooting them at each other!


All you need is an elephant's foot burning into the ground and a way to direct it via partial cooling...


MacArthur recommended turning the Korea-China border into a nuclear wasteland to prevent further Chinese troop movements.

Thank god everyone has nukes now, it keeps people sane!


> There was a time when people wanted to dig tunnels with nukes

The article seems to be about mining rather than tunnelling.

And the issue with the idea being? We also dig using explosives; there isn't an in-principle problem. Reading the wiki article, it looks like the yields were excessive, but at the end of the day mining involves the use of things that go boom. It is easy to imagine small nukes having a place in the industry.


>And the issue with the idea being?

See the 'rationale' section of the article. The point of it was to rebrand nuclear weapons as multi-use 'peaceful' tools and drive acceptance for nuclear weapons programs. Which was a pretty standard tactic of military projects during the cold war.


The more available they are, the more likely they are to be used on humans.


> That's the literal topic of War Games (1983)

Probably ran out of tokens before the end, during training.

"How about a nice game of Chess?"


They had to do with "state-of-the-art radars", "military-grade communication systems", etc.


Yeah but they dealt with sota and military-spec systems their entire career and they know that it just means "lowest bidder".


Or "alignment" which means "let's ensure the AIs recommend launching nukes only when it makes sense to, based on our [assumed objective] values"


Yeah... the more I learn about nuclear weapons history, the more I discount our society's long-term viability. There are way too many frighteningly close calls already, and there are probably others that aren't widely known.

It's not just nukes that are concerning, either. If we're unable to mitigate such a visceral existential risk, we aren't going to do any better with more subtle vulnerabilities. AI of course accelerates some risks and introduces new ones.

This doesn't mean we're doomed or anything, but if I had a magic portal to peer a few hundred years in the future and saw humans had been obliterated by nukes, runaway AI, some generated supervirus, runaway climate change, or some other manufactured risk I would be completely unsurprised.


We shouldn't be the least bit surprised no human has complied so far.

If they had, then we wouldn't be having this conversation. For all we know, there may be a vast multiverse of universes, some with humans, and we would only find ourselves having this conversation in one of the universes where no human pressed the button.


By that logic, it may actually be pretty common for rabbits to swallow the sun. We just haven't seen it happen because we're in the wrong universe and would've died if it happened in ours.


Anthropic Principle


> We have prior art that says humans don't just launch all the nukes just because the computers or procedures say to.

This relies on processes being in place to ensure that a human will always make the final decision. What about when that gets taken away?


I find it hard to imagine that the people in a position to kill those processes could ever be that zealously in love with AI, but recent events have given me a tiny bit of doubt.


I mean, in the cases where higher command said "launch your nukes," lower command did not do so, and everything turned out OK, higher command is of course glad it worked out that time. But to them it must also look like a flaw in the system that needs to be automated away. So a computer that will launch all the nukes when ordered must look very appealing in contrast to humans who might save humanity.


I briefly got into a "rabbithole" of watching videos about attempts to intercept ballistic missiles and hypersonic glide weapons. Pretty interesting: decoys deployed in space... the outcome seemed to be not good; you can't guarantee 100% interception.


A missile will always be cheaper than a missile interceptor, and the interceptor will never be a 1:1 kill. Building a missile-interceptor system is a good way to get your strategic opponent to build a bigger stockpile.


Disagree on always being cheaper. Military planners are obsessed with the best weapons, and such interceptors are pricey. But look at Israel's Iron Dome: ~$50k/shot. They deliberately built a dumb SAM because it was designed to go against dumb opponents: objects falling freely on a ballistic trajectory. While they are usually facing light stuff that isn't even worth that, they have successfully engaged longer-range stuff that costs many times what the interceptor does.

Overall, though, the offense always wins this one because interceptors can only protect a limited area whereas missiles can go anywhere.


Iron Dome is a great example of my point. It is a $50k interceptor designed to take out a propane tank with a rocket strapped to it, not a real ballistic missile like a Scud.

Patriot missiles ($7MM) take out Scuds ($3MM).


I hope humans in charge are as wise now as they were then.


Surely that’s the definition of a quixotic hope.


I am scared of two things.

First, people being rubber stamps for AI recommendations. And yes, it is not unreasonable that in a dire situation, someone will outsource their judgment.

Second, someone at the Pentagon connecting the red button to OpenClaw. "You are right, firing nukes was my mistake. Would you like to learn more facts about nukes before you evaporate?"


If you think humans are going to delegate reasoning and responsibility to something, shouldn’t you also be concerned about the sorts of recommendations that thing is going to make?


If you found out the pentagon was using a magic 8 ball to make important war decisions what would you want to fix - our military leadership or the inner workings of the toy?


One of those sounds a lot easier than the other. The magic 8 ball toy company would also probably be pretty incentivized to not die in a nuclear holocaust.


Unless you're suggesting the toy company secretly rigs the magic 8 ball to never recommend nuclear war, I'll take my chances with the organizational changes.


That is indeed what I think the GP is suggesting. And why not?


Because if your leadership is stupid enough to trust the 8 ball they should not be in charge??!


No kidding. But how hard is it to effect leadership change, especially in an organization where you have next to no leverage? Really hard.


The speed with which my technical cow-orkers and friends have started relying on the "AI Overview" only, in lieu of following any links, in search engine results (let alone not using search engines at all over chatbots) tells me reasoning and responsibility will be outsourced as soon as possible.

Humans are fundamentally lazy. The brain is an "expensive" organ to use.


One can try it themselves, for Claude is fine at waging war [1]. Notice the thoughtful UX, including typing "I ACCEPT FULL RESPONSIBILITY".

[1]: https://nitter.poast.org/elder_plinius/status/20264475874910...


Be not scared of humans behind keyboards. Be scared of humans without a keyboard or desk, and no future beyond jihad, now getting nukes nearby because the age of empires has returned.


Trump's Golden Dome is literally advertised to help the U.S. win a nuclear war by leveraging AI.


Elon's involvement in the nuclear military complex https://www.mintpressnews.com/pentagon-recruiting-elon-musk-...


If it's so cumbersome, why don't US companies pull out of the EU market? I bet they'd make money anyway, wouldn't they?


LGTM


One similarity is, if I'm correct, that Russia claimed the naval base of Sevastopol was vital for Russian security.

The protection of the population and the illegitimacy of the current government were arguments developed by Russia at the time; they have not yet been made by the USA, but I suspect this might start to develop in the next few weeks.

The common ingredients used to justify an invasion/annexation are a mix of:

- Self-defense / security
- Historical and geographical claims
- Protection of the population
- Moral sugar-coating ("we had no choice")


Greenland was not part of the USA for 150 years. It is also not mostly populated by ethnic Americans who speak English as their mother tongue. Nor is there a large American military base there that Greenland / Denmark is trying to evict them from and potentially hand over to a geopolitical adversary. Greenland is also an island a couple thousand miles away from Denmark.

Honestly, it is such a surprise that the difference is not obvious.


That does not make Germany look any better, but I find the "percentage on time" metric not very useful compared to the "years of delay" metric. And arguably an average/median delay per train would be better? Also, some delay-volatility data would be interesting.

If you look at France for example, 80% of trains are punctual, and the "total delays" figure is actually on the low end; France being on the large side with lots of lines, I would say that shows the delays (the remaining 20%) are actually short.


Nothing is perfect, but living in the UK after living in France, I now have a lot more love for the SNCF than I used to.


Actually I’m taking the train every day to go to work and I have barely any complaints about the SNCF.

Most of the time they do what they can to deal with issues.

I don’t feel like there are too many issues; it’s just that they are extremely bad at communicating issues when they happen.

Sometimes the train is not there when it should be, but on the screen it just disappears as if it had passed. Most of the time it’s just 2-5 minutes late, but you can’t know. Maybe it’s just late. Maybe the traffic is stopped. Who knows.

I just don’t understand how they don’t have people whose job is just writing messages for the information screens.

What is worse is that in my region they have pretty decent community managers for live information, but they only post on Twitter, because why not. So they already have people doing this work, but those people are saying different things than what the screens show. Just let them write things on the screens :D


Why is that? Better service?


Law is always subject to interpretation, and as imperfect as that sounds, it is better than no law at all. And I'm not talking about hate speech specifically. Using this as a tool to silence opposition is possible and made easy in countries that do not value and nurture the independence of institutions and have rampant corruption, often countries with authoritarian leadership. The UK is not exempt from criticism, and it would be unhealthy not to criticize it, but comparing Russia/Putin with UK/Starmer makes it evident that you are more concerned with pushing a political agenda than with facts and reason.


No, there is a thing called the law. Laws are passed by elected people and applied by a judicial system that is not the executive branch. Hope that helps.


> applied by a judicial system that is not the executive branch

You have the right to a trial. Defending yourself from a Federal felony charge only costs $250k+.

Given recent events, I think undermining civil liberties to expand executive powers is crazy.


I am nowhere advocating to expand executive power in my response.

edit: apologies for not getting your point, I actually think I'm in line. Being able to defend yourself in the US looks too expensive.


As a tech person, the older I get the less tech interests me. Analog is where I get the fun from: no more smart watch, smart TV, Spotify, connected home things, automatic coffee machine. No thank you.


Almost no new technology is respectful enough to its users for me to consider making accommodations for it in my life.

It's not just that it's not fun. Any fun I derive is canceled-out by the inevitable loss.

I've felt white-hot blazing anger so many times when a feature is taken away by an "update" that I am not permitted to revert. I don't want to feel that feeling anymore.


It's such a complicated way to work though: you start another set of changes, then you go back to address comments, then you go back to update the stacked branch, and you might need to do that a few times... Teams should focus on getting stuff merged and not create massive PRs that live forever; life becomes so much easier.
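The back-and-forth described above can be sketched in plain git. This is a minimal, hypothetical illustration (branch names `feature-1` / `feature-2` are made up): each round of review comments on the base branch forces a rebase of everything stacked on top of it.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name Dev

echo base > file.txt && git add . && git commit -qm "initial"

# PR 1: the base branch, currently under review
git checkout -qb feature-1
echo one > one.txt && git add . && git commit -qm "feature 1"

# PR 2: stacked on top of PR 1
git checkout -qb feature-2
echo two > two.txt && git add . && git commit -qm "feature 2"

# Review feedback arrives on PR 1, so it gets another commit...
git checkout -q feature-1
echo one-fixed > one.txt && git commit -qam "address review comments"

# ...and PR 2 must be rebased onto the updated base.
# This step repeats for every round of review on the base branch.
git checkout -q feature-2
git rebase -q feature-1
```

After the rebase, `feature-2` contains the review fix from `feature-1` and its own commit replayed on top; multiply this by several stacked branches and several review rounds and the overhead the comment complains about becomes clear.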

