
It's a real negotiating tactic: https://en.wikipedia.org/wiki/Brinkmanship

If you convince people that AGI is dangerous to humanity and inevitable, then you can force people to agree to outrageous, unnecessary investments to reach the perceived goal first. This is exactly what happened during the Cold War, when Congress was thrown into hysterics by estimates of Soviet ballistic missile numbers: https://en.wikipedia.org/wiki/Missile_gap


Chief AI doomer Eliezer Yudkowsky's latest book on this subject is literally called "If Anyone Builds It, Everyone Dies". I don't think he's secretly trying to persuade people to make investments to reach this goal first.

He absolutely is. Again, refer to the nuclear bomb and the unconscionable capital that was invested as a result of early successes in nuclear tests.

That was an actual weapon capable of killing millions of people in the blink of an eye. Countries raced to get one so fast that it was practically a nuclear Preakness Stakes for a few decades there. By framing AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do. That's a specious argument when AI has yet to prove it could kill a single person by generating text.


> He absolutely is.

When people explicitly say "do not build this, nobody should build this, under no circumstances build this, slow down and stop, nobody knows how to get this right yet", it's rather a stretch to assume they must mean the exact opposite: "oh, you should absolutely hurry to be the first one to build this".

> By framing AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do.

False. This is not a bomb where you can choose where it goes off. The literal title of the book is "If Anyone Builds It, Everyone Dies". It takes a willful misinterpretation to imagine that that means "if the right people build it, only the wrong people die".

If you want to claim that the book is incorrect, by all means attempt to refute it. But don't claim it says the literal opposite of what it says.


Edward Teller worried about the possibility that the Trinity nuclear test might start a chain reaction with the nitrogen in the Earth's atmosphere, enveloping the entire planet in a nuclear fireball that would destroy the whole world and every human along with it. Even though this would have meant the bomb had approximately a billion times more destructive power than advertised, making it far more of a doomsday weapon, I don't think that would have been an appealing message to the White House. And I don't think that possibility made anyone feel it was more urgent to be the first to develop a nuclear bomb. Instead, it became extremely urgent to prove (in advance of the first test!) that such a chain reaction would not happen.

I think this is a pretty close analogy to Eliezer Yudkowsky's view, and I just don't see how there's any way to read him as urging anyone to build AGI before anyone else does.


The grandparent asked what money was in it for rationalists.

You're saying an AI researcher selling AI Doom books can't be profiting off hype about AI?


This reminds me a lot of climate skeptics pointing out that climate researchers stand to make money off books about climate change.

Selling AI doom books nets considerably less money than actually working on AI (easily an order of magnitude or two). Whatever hangups I have with Yudkowsky, I'm very confident he's not doing it for the money (or even prestige; being an AI thought leader at a lab gives you a built-in audience).


The inverse is true, though - climate skeptics are oftentimes paid by the (very rich) petrol lobby to espouse skepticism. It's not an asinine attack, just an insecure one from an audience that also overwhelmingly accepts money in exchange for astroturfing opinions. The clear fallacy in their polemic is that ad hominem attacks don't address the point people care about; they're a distraction from global warming, which is the petrol lobby's end goal.

Yudkowsky's rhetoric is sabotaged by his ridiculous forecasts, which present zero supporting evidence for his claims. It's the same broken shtick as Cory Doctorow or Vitalik Buterin - grandiose observations that resemble fiction more than reality. He could scare people if he demonstrated causal proof that any of his claims are even possible. Instead he uses this detachment to create nonexistent boogeymen for his foreign policy commentary that would make Tom Clancy blush.


What sort of unsupported ridiculous forecast do you mean? Can you point to one?

I'm not the grandparent, but the more interesting question is what could possibly constitute "supporting evidence" for an AI Doom scenario.

Depending on your viewpoint, this could range from "a really compelling analogy" to "a live demonstration akin to the Trinity nuclear test."


FWIW, in the case of Eliezer's book, there's a good chance that at the end of the day, once we account for all the related expenses, it makes very little net profit, and might even be unprofitable on net (which is totally fine, since the motivation for writing the book isn't making money).


