Hacker News | landfish's comments

Palisade Research | ML Research Engineer | ONSITE | Berkeley, CA | VISA

We research dangerous AI capabilities to better understand misuse risks of current systems, and how advances in hacking, deception, and persuasion will affect the risk of catastrophic AI outcomes.

See the full job ad and application here: https://palisaderesearch.org/work

We plan to study emerging dangerous capabilities in both open source and API-gated models across automated hacking, spear phishing and deception, and scalable disinformation.

We are looking for people who excel at:

- Working with language models, which includes supervised fine-tuning, using reward models/functions (RLHF/RLAIF), building scaffolding (e.g. in the style of AutoGPT), and prompt engineering / jailbreaking.

- Software engineering. Alongside working with LMs, much of the work you do will benefit from a strong foundation in software engineering—such as when designing APIs, working with training data, or doing front-end development.

- Technical communication. By writing papers, blog posts, and internal documents; and by speaking with the team and external collaborators about your research.

Salary: We’re offering between 150,000 and 250,000 USD, depending on your skill and experience, plus healthcare.


Note: I wrote this post and cross-posted it to the EA Forum, and there are some good comments there discussing the longer-term risks of nuclear war that I didn't include in the original post: https://forum.effectivealtruism.org/posts/mxKwP2PFtg8ABwzug/...


PDF version is free online too. https://www.oism.org/nwss/ :)

The citations for "The Robock group’s models are probably overestimating the risk [of nuclear winter]" are the Reisner et al. articles I linked. My conclusion is just my opinion after reading the back and forth in the literature, but I encourage other people to go through it and come to their own conclusions!


I've heard a lot of people say this. Wikipedia has a decent summary of the history of this position: https://en.wikipedia.org/wiki/Nuclear_holocaust#Origins_and_...


In several decades of seeing stray discussions online about nuclear war, including here on HN, I've yet to see a discussion where human extinction wasn't a commonly assumed outcome.

It's almost always thrown out as a matter-of-fact inevitability; it's just assumed. It appeals to the self-hating cult of 'humans deserve it.' I'd wager the people who float that theory more often than not hate humanity and are lusting for its destruction, so the idea appeals to them; they want it to happen and are projecting that irrationally outward.


I wrote this post, and the point is to improve discussion of societal risks. As others have pointed out, nuclear war doesn't have to kill everyone to be extremely bad and worth avoiding. I spend a lot of my time thinking about risks from nuclear and biological weapons, and downplaying these risks is not my purpose.

One problem with assuming nuclear war kills everyone is that this discourages anyone from preparing for potential nuclear wars. While of course we should try to prevent it, we should also try to mitigate the consequences in the event a war does occur!

I've written more about risks from nuclear war here: https://www.lesswrong.com/posts/rn2duwRP2pqvLqGCE/does-the-u... and here: https://jeffreyladish.com/one-hundred-opinions-on-nuclear-wa...


This is mostly Herman Kahn redux.

https://en.wikipedia.org/wiki/Herman_Kahn

When there were 3 TV channels in the 1960s, you would often hear discussions just like this in prime time programming, at times from Kahn himself. The topic and the people involved inspired the movie Dr. Strangelove and were at the forefront of the national consciousness in the late 1950s and early to mid 1960s.

As a set of ideas and concerns, it was beaten to death at the time, revived a bit during the “nuclear winter” thesis of the 1980s, and always was and continues to be an active issue at the Federal level. Mismanage nukes and you can be booted out even if you are the Secretary of the Air Force.

https://www.reuters.com/article/us-usa-airforce-idUSWAT00960...

It is a very, very deep rabbit hole. Could be fun to see how far you can get with unclassified sources, for various peculiar values of “fun” until you turn into a character from Strangelove yourself.

The classic starting point is On Thermonuclear War, 1960, Kahn.

Now that I think of it, Kahn died in the early 1980s. If he reincarnated immediately, he’d be in his mid 30s.

How old are you, landfish? Is that you, Kahn?

For everyone else, repeat after me, nuclear weapons are weapons of policy, not weapons of war. Nuclear weapons are weapons of policy, not weapons of war...


> For everyone else, repeat after me, nuclear weapons are weapons of policy, not weapons of war.

Some Japanese people would disagree with you.


It shows that the people who lived through WW2 aren't alive anymore. People forget: https://en.m.wikipedia.org/wiki/List_of_nuclear_close_calls


Kahn actually ran studies to come up with his numbers: a nuclear war causing a ten-year economic regression and killing only 50% of a country. His point was that on a long enough timeline it would/will eventually happen, and with adequate preparation a country could ensure both objective victory and survival for the majority of its populace and civilization.

Someone has to think about the unthinkable. Wishing nukes away won't make them disappear, and as shown in Daniel Ellsberg's The Doomsday Machine, nuclear weapons have been used as weapons of war in U.S. foreign policy for decades as indirect threats and bluffs. They will eventually be used again as explosive ordnance. Civil defense was lambasted and deprioritized in the west because people thought with their hearts instead of their minds with regard to the destructive power of nuclear weapons.

Decades of poor science on nuclear winter based on the terrible TTAPS climate model have engendered the notion that nuclear war is a death sentence to humanity. It's not true, academia knows it is not, but nobody will push the subsequent corrective nuclear winter stories because the fear of nuclear winter keeps nuclear hawks at bay in political office. The top answer in this Quora question has a ton of links on this: https://www.quora.com/Is-the-nuclear-winter-a-hoax

They are weapons of war. They are effective weapons of war. They will eventually be used as explosive ordnance and not just threats and bluffs. Calling them unthinkable, and anyone who plans for survival following their use a horrible person, is bad emotional rhetoric. We should have an implementable plan for civil defense like China and Russia do. We should not allow fear to prevent the survival of our way of life.


Ellsberg's The Doomsday Machine: Confessions of a Nuclear War Planner is one of the few books that has literally given me nightmares.

Imagine reading memos with stuff like "we plan to kill at least 600 million people, probably a lot more, a lot of them our allies"....


There were only 3 billion people on the planet at the time, too. Estimates for U.S. casualties were 120 million dead in 1969, out of a population of 202.7 million.

That still would have left the U.S. as one of the largest countries on the planet, and fallout would largely have dissipated after two to six weeks. Congress, the president, and the Federal Reserve would likely have survived thanks to top secret bunkers and Looking Glass. The military would have been wiped out. Food production would be largely unaffected for grains and vegetables, but nearly every nonhuman animal in the midwest, coastal cities, and big sky parts of the country would be dead from fallout. Food delivery and distribution would rely largely on coal rail and river barge transport, but it mostly did at the time anyway. It would take ten years to rebuild refining capacity and restore the power grid and military. Things would eventually return to normal. The cities would be rebuilt and reinhabited.

I know it is hard to believe, but look at Hiroshima, Nagasaki, German firebombed cities from WW2 like Dresden; they were rebuilt from the ashes. Germany was nearly entirely destroyed by WW2. It came back.

Barring nuclear winter actually occurring (which the Kuwait oil field fires kinda prove won't happen), total nuclear war is something countries can recover from, which makes it even more likely to happen in the future. As Ellsberg put it, "any nonzero chance is crazy."


The first part of the book is available online:

https://apjjf.org/-Daniel-Ellsberg/3222/article.html

The exact quote that stunned me being:

"The total death toll as calculated by the Joint Chiefs, from a U.S. first strike aimed primarily at the Soviet Union and China, would be roughly 600 million dead. A hundred Holocausts.

I remember what I thought when I held the single sheet with the graph on it. I thought, this piece of paper should not exist. It should never have existed. Not in America. Not anywhere, ever. It depicted evil beyond any human project that had ever existed. There should be nothing on Earth, nothing real, that it referred to."


> Nuclear weapons are weapons of policy, not weapons of war

That almost sounds plausible if you don't know the long history of nuclear weapons being almost used and avoided by sheer luck. Your thesis also breaks down once you remember that the USA is building 'small-scale' nuclear weapons with the precise intention of making them usable in current conflicts.

Not to mention that the US has recently unilaterally withdrawn from the disarmament agreement with Russia, under a supposedly Russia-friendly president.


>nuclear weapons being almost used and avoided by sheer luck.

Sheer luck is an odd way to describe "so much process and procedure and checks that when something does slip through the cracks it is correctly identified as anomalous and nobody gets nuked". Sure, it might not make you feel warm and fuzzy inside, and near misses show where to improve the system, but to pretend it's just "sheer luck", despite god knows how many man hours being spent on the problem, is naive, hyperbolic and infantile.


The post discusses only first order effects. In case of nuclear war most people will die not from kinetic shock or radiation, but from famine and disease caused by the collapse of supply chains and medical infrastructure, and from unavoidable violence during the fight for scarce remaining resources.


Exactly!

The intangible disruptions of a human society will indirectly cause deaths long after the nuclear war itself. Think about it: just a few weeks of COVID-19 lockdown can collapse the economy and leave people out of jobs, unable to pay rent or utilities. A nuclear war would immediately cut off or contaminate essential human necessities like food, water, shelter, electricity, etc., so even without being harmed by kinetic force, radiation, or climate alteration, people still cannot survive long term.


I always found that more frightening than total extinction. I can't imagine all humans blinking out in a few hours. I can imagine having no food or money, trying to work dangerous odd jobs here and there and starving to death along with 30% of my town during a winter famine a few years later.


Think of life outside the Western world for a bit. This was the case for most of colonised Asia, and is still a thing for some places in Africa.


Those effects won't cause extinction though, because they'll diminish as the population shrinks, so it'll reach some equilibrium. This is about extinction, not just lots of people dying. Nuclear winter wouldn't get less harmful the fewer people are around.


Very few people would choose to have children and raise them in a post-nuclear war environment, even if the radiation didn't make viable babies impossible. It's possible that the population would die out simply because it wouldn't be replaced quickly enough.


Some people don't "choose to have children." They feel like fucking so they do, and children happen afterwards.

I wouldn't expect that to change after a nuclear war, especially in a world without contraceptives.

That aside, there really needs to be more understanding of the difference between species extinction and cultural extinction.

Humans could survive for hundreds of millennia in a devolved state. They wouldn't be particularly human by our standards, but they'd have the same DNA.

But our culture would be gone. All the accumulated knowledge would be wiped out, and it's very possible nothing like it would ever return.

DNA is fairly robust. Culture, knowledge, and rationality are extremely fragile. We're not doing an outstanding job of preserving them even without a nuclear war, and I can't imagine a nuclear war would make the situation any better.


Just to continue this thread of contrarianism: I doubt we could eradicate human knowledge from the planet if we tried at this point. It is encoded in our minds, our books and our silicon chips.

We may lose the know-how / capacity to produce certain things, as we did with Damascus steel or Roman concrete, but we are an endlessly resourceful species. I place my faith in our ability to adapt and come up with new solutions as required.


If all humans aren't eradicated by nuclear war, I don't see why all memory, books, hard drives, other archives, computers, power plants, machinery, or even all contraceptives would be.


Computers and hard drives have a half-life counted in years, and need energy and communication infrastructure to operate. Post-collapse, once all the computers break down, that's the end of 20th-century technology. Nobody is going to make new ones, because to build and maintain machines that make computers you need working computers. Same for mining and refining necessary resources, controlling chemical processes, etc.

Whatever technological knowledge survives in the books, most of it will be useless for centuries, as we regress to a pre-industrial level of technology and can't climb back out - we've already mined out all easily-accessible high-density energy sources that are necessary for reindustrialization.


Why would we regress to a pre-industrial level of technology? Even if you somehow wipe out 90% of the population worldwide, this brings the population back to mid-18th century levels - which is to say, when the industrial revolution was already ongoing. But that same number of people would know all the things that had to be discovered back then, and would still have a lot of machinery, and a lot of already-refined materials, to bootstrap from.

Nor is there a particular shortage of hydrocarbons to burn, with consumption reduced so much due to population loss and reduction in quality of life. Consider that the US today emits more than 200x as much carbon into the atmosphere as it did in 1850, and that this growth has been exponential. So what we consider one year's worth of reserves today could provide energy for many decades in this hypothetical.
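To put rough numbers on that (these figures are made up purely to show the scaling, not actual reserve estimates):

    # Illustrative arithmetic only; the consumption-drop factor is an assumption, not data.
    reserves_in_years_today = 1      # hypothetical: reserves worth one year at current consumption
    consumption_drop_factor = 100    # assume post-war consumption falls to ~1/100th of today's
    years_of_energy_post_war = reserves_in_years_today * consumption_drop_factor
    print(years_of_energy_post_war)  # -> 100, i.e. "many decades" from one year of today's reserves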


If resources are already mined, why would we need to mine them again?

Simple computers are relatively easy to make with today's tech. Software exists already.


That's nonsense. People had children through the ice age! Poor struggling Africans and Indians keep having children. The people who choose not to have children seem to be contented wealthy professionals. If you're able to keep yourself alive, it's not much more work to keep children alive.


>One problem with assuming nuclear war kills everyone is that this discourages anyone from preparing for potential nuclear wars.

This is what the thinking was during the early part of the cold war, the whole duck and cover era. The decades after that, and the knowledge gained during the cold war, are what led to our current fears and justification for focusing on disarmament rather than nuclear war prep.

Whether or not humans end up totally extinct is irrelevant. Enough post nuclear fiction has been produced that people are well aware, yes, we may not all die, but life sure is going to suck a whole lot for literally every living thing on Earth.

That's the point of preventing nuclear war, that's the point of the world's current attitude towards it.


Whether or not humans end up totally extinct is irrelevant to the questions like whether nuclear war is bad and whether we should work to prevent it.

However, the question whether or not humans end up totally extinct is relevant when comparing nuclear war with other very horrible risks, some of which do threaten total extinction. To discuss, prioritize and act on very serious (though, hopefully, unlikely to materialize) risks, we need a scale of "horribleness" that doesn't just give every serious problem a 10 out of 10. We need to reasonably prioritize our actions among many potentially dystopian risks - if we consider a single one of them a negative infinity, a literal "AVOID AT ALL COSTS", then those "all costs" include increased vulnerability to other risks, and that's not appropriate.


What other risks are there that are directly intentionally human driven that could cause such an extreme amount of destruction, death and societal collapse over the span of a few hours?


Meteorite strike or supervolcano eruption that we could have protected ourselves against but didn't because of resources spent on nuclear war prevention.

Not sure why you added the "within a few hours" qualifier or "intentionally human driven". Have you already thought of something that takes longer, is done by humans accidentally, or is natural? Is it climate change? Does that already answer your question?


Yeah pretty much, because there is nothing else that compares in that way. Climate change is long and drawn out, meteors and disasters are acts of nature.

There is nothing else so swiftly and completely destructive that is entirely driven by humans and their choice as to whether they should destroy or not.

That's the thing about nuclear war, unlike all those other things, it really comes down to a few humans deciding not to do it. That's what makes it more terrifying than those other ones.


The suddenness is a problem because there isn't enough time to adapt? Implying that we'll adapt to climate change? Watch out for heresy there ;)

But asteroid or comet impacts. Comets in particular can come so fast and undetectably that we won't have time to react. We could prevent a comet impact if a few humans decided to build such a defense system, and we might be wiped out if those few humans decide not to do it. Sounds equivalent to nuclear war.


In the near-ish future, I guess deorbiting sufficiently large asteroids? In the maybe further future, AI and aliens.


> life sure is going to suck a whole lot for literally every living thing on Earth.

Not at all. Many organisms would still find it easy to survive in a post-nuclear-war world and would eventually evolve to fill the roles of those that went extinct.


> One problem with assuming nuclear war kills everyone is that this discourages anyone from preparing for potential nuclear wars

It also encourages lazy cynicism.

If everyone died in a nuclear war, humanity got its just deserts. Mix in the West’s discomfort with death and nuclear holocaust becomes a punch line. But if our choices today will cause millions to suffer for generations, the cost becomes more tangible. The threat becomes detailed and gruesome, and that makes it relatable.


> try to mitigate the consequences in the event a war does occur!

in the case where mitigation is deemed acceptable, it would paradoxically make war more likely!

That's why a defensive missile shield technology is not a good investment if you want more peace. The existing MAD mantra is what stops nuclear war. It should continue until humanity unifies into a single world gov't.


> That's why a defensive missile shield technology is not a good investment if you want more peace.

That depends on whether you go for a 'weak' shield or a 'strong' shield. A weak shield may defend against a dozen or two warheads: something that North Korea would launch. Japan has a 'weak' shield AFAICT:

* https://www.rand.org/pubs/monograph_reports/MR1374.html

A strong one would protect against hundreds: another major power. IMHO only the latter would destabilize peace, while the former would help defend against crazies and potentially rogue operations.

> The existing MAD mantra is what stops nuclear war.

There are a bunch of criticisms of MAD:

* https://en.wikipedia.org/wiki/Mutual_assured_destruction#Cri...


AFAIK anti-missile shields aren't a realistic option. Modern ICBM designs have multiple independently targetable warheads. There are also submarines with nuclear missiles. You can also launch decoys (e.g. aluminium foil or flares).

And uh. Recently, I was reading a book called Skunk Works. The author explains that, basically, the US was able to keep the existence of stealth fighters a secret from the public (and Russia) for years and years. He also mentions the idea of a stealth ICBM multiple times in the book (which was published in 1995). That's when it hit me. That technology already exists. Russia and the US probably have an arsenal of stealth ICBMs. They're just not publicly boasting about it, but if you think about it, there's no reason that technology wouldn't exist, particularly since military contractors were already thinking about it over 25 years ago.


Stealth ICBMs are not a thing. Re-entry prevents complete stealth, as the plasma generates a distinctive radar signature. You could use plasma stealth to make re-entry targets impossible to lock on, but they would be detected.

Also, the existence of stealth fighters was not a secret to the Soviets. They came up with the idea of a stealth fighter and figured out how to minimize the advantage they would give before the US did either, but decided not to pursue it. They were certainly aware of the US stealth program quite early on, and even before the US program started they knew it was possible. However, because of the problematic bureaucracy of the USSR from the 70s onwards this wasn't pursued for quite a while, and by the time the program reached maturity the collapse of the USSR was well on its way.

That is to say, you can't really hide the existence of such big technologies and paradigms from near-peer opponents - they probably thought of it before you've finalized your prototype.


I don't understand why you would keep such a thing secret. As Dr. Strangelove rightly pointed out, a doomsday device is pointless if nobody knows about it.


Knowing you have a stealth ICBM doesn't change a thing if there is no way for the other side to defend against the non-stealth ones, so it wouldn't be a deterrent to most possible adversaries, but the optics of disclosing such a thing developed in, say, the 90s wouldn't be great.

Those that might conceivably try to defend themselves against conventional ICBMs are few and might very well know such a thing exists, it may work as a deterrent in that context.

If such a thing exists at all, not quite sure how you might make an ICBM stealthy, couldn't think of a way to hide launch and reentry signatures...


Having any weapon at all is worth keeping a secret. From the moment you have a weapon, and an enemy, the latter will try to disarm you.


Not 'worth keeping a secret' - worth controlling the information revealed about it. In the case of nuclear weapons, you want to keep its weaknesses and fallibilities secret while trumpeting its existence from the rooftops. The more parties know you have a mega-weapon, the lower the chance you'll have to use it.


> The more parties know you have a mega-weapon, the lower the chance you'll have to use it.

But if you have to... the lower your chance to win, as an enemy with more than 1 gram of gray matter will do just about anything to take it down before you can shoot.

Betting all of your strategy on the assumption that the enemy is scared enough of you not to attack is stupidity beyond all bounds.

If you attack someone, especially if you attack somebody stronger than you, then you attack with all force available to you, and maximum ferocity.

You assume the enemy will willingly throw away his biggest chance to take away your biggest force multiplier, and the biggest chance to win?


> If you attack someone, especially if you attack somebody stronger than you, then you attack with all force available to you, and maximum ferocity.

True, IF you attack someone. But if you see someone who is unassailably stronger than you, that you couldn't possibly defeat, then you don't even go there. And that is the point of MAD. An enemy with one nuke is scary, scary enough to make them worth attacking. But an enemy with thousands of nukes, scattered across the globe, in unknown locations, which will launch at their signal (or lack of signal)... You just don't attack them.


And that's an extremely dangerous assumption. You don't need to dive deep into history for good examples of the opposite.

And if you dive deeper, you will find examples of far more suicidal disparities between warring factions.

Second, you are not accounting for the possibility of your opponent not even trying to win. If wars were fought by rational people, we would've long had math PhDs for generals.


But weak shields are only not destabilizing as long as there is complete awareness of their weak nature. Forms of reality denial are a standard mode of operation in politics, and good luck finding a war that didn't start with one form or another of reality denial.

On a tangent that doesn't really change whether we find weak shields bothersome or not: arguably all strong shield projects were just weak shields with reality denial built right into the sales pitch.


It's important to think hard about nuclear war. However, the problem to focus on is that nuclear war is a lot more likely than most people think. Accidental launch in reaction to bad intelligence is a constant possibility; also the number of nuclear powers continues to increase. Civil war in any nuclear-armed nation is also something to consider. All of these could result in multiple strikes, which would unleash horrors not seen since WW II or before.


With respect, I think that’s a narrowly scoped proposition that ignores obvious perils with significant impacts.

Look at COVID-19 impacts on supply chains in the US and China, due to short term disruptions. Any significant nuclear conflict would have huge impacts. Destruction of Long Beach, Chicago and similar hubs would implode the economy in an extreme way.

It’s also a loss of control on the societal level that would unleash other horrors.


Very few things could actually make humans extinct at this point. It would take an impact like the one that produced the moon to finish us off. We're so adaptable that some of us will survive and rebuild. Whether we can ever get back to technological civilization without easily accessible fossil fuels is debatable, but I think we'd find a way - it would just be a lot harder and take longer.

What worries me is not extinction, but societal collapse. We've seen how disruptive covid19 was, and it was a baby event compared to things that have happened in earth's history.

Climate change and nuclear war both seem highly probable causes of societal collapse. Both are preventable if we can just find a way to not be collectively stupid. I don't know how we do that, but it's insane we're not trying harder.


Realistically, if there was a full-scale nuclear war, famine and violence would kill 90-95% of people. It would also destroy a lot of our technological capabilities. There are only a handful of silicon fabs in the world, and they have complex supply chains. It could bring us back to some kind of a technological middle age, except we'd have guns, electricity and radios.


If 90% of the population dies, we're left with around 750 million people worldwide.

Last time the world had that many people in it was around 1750. How fast could the people back in 1750 get to the present technological level, if they had all the science and engineering and exploration already figured out, and also got a bunch of machinery to kickstart things?


Let's say 95 percent are killed. That still leaves around 400 million people. Let's say 95 percent of scientists are killed.

That might still leave around 400,000-500,000 scientists worldwide and millions of engineers, millions of medical doctors, tens of millions of technicians of different kinds. Sure, many of them will be in odd places like New Zealand, South Africa or Argentina. But that's likely where most survivors would be as well.

Just for comparison, until the 1940s the US and Europe produced around 10,000-20,000 doctorates per year. At no point back then did we have more scientists worldwide than we would have after 95 percent of scientists were killed today. And unlike back then, the surviving scientists would have access to all the scientific knowledge produced since, either as personal memory or in books and databases.

It's quite possible the internet would survive as well. After all, it's designed for that kind of event. Thousands of libraries would survive. Millions of machines, vehicles, airplanes, computers, components etc. would survive. Wikipedia, the 100 most important scientific journals, and the US patent office database can easily fit on a laptop, and libraries around the world are making copies all the time.


The ammo and electricity would not last long in most parts of the world. I think a lot of communities would have real trouble even feeding themselves and a few hard winters could wipe them out.


Some electricity is not that hard to generate with late Middle Ages manufacturing techniques. It won't be a nice stable grid, but you'd likely be able to run e.g. mining pumps with power generated from a nearby water mill.

Similarly, some refrigeration is not super difficult to build once you know how it works. That would help quite a bit against starvation from bad harvests because it allows you to have bigger stores of food.


Solar panels and diesel generators are pretty ubiquitous these days. Actually building more of either is more complicated, but bootstrapping the capability to do so before existing supplies of either run out (and we're talking years here) is quite feasible.


You could scavenge electric motors, some metal sheets and a bunch of wood into a hydroelectric generator. So there would still be electricity.


Yeah, OK, I know how to wrap copper and spin it around magnets, and get _some_ electricity, but I don't know how to measure or manage the voltage and the current. I don't know how to make an inverter. I don't know how to make a light bulb or a battery (or really anything that "uses" the power). Existing appliances only have at most 20 years in them.

With everybody struggling to raise crops and livestock, let alone trying to reinvent the light bulb, it would be a long slow climb back to civilization.


> With everybody struggling to raise crops and livestock, let alone trying to reinvent the light bulb, it would be a long slow climb back to civilization.

People aren’t going to forget how to trade. The people with that manufacturing and engineering knowledge aren’t going to be busy with livestock. They’ll be trading these valuable goods for food.


What if the Southern Hemisphere isn't targeted? Fallout will mostly affect the northern hemisphere, and if there isn't a nuclear winter (or a milder one than projected), then the damage will be largely economic to those nations. But their infrastructure and ability to grow food would remain intact.

I suppose refugees and the political vacuum created would be significant problems.


IIRC the nuclear winter would not be as bad as once predicted; you could see a 10 degree Celsius drop. That would still be enough to greatly reduce crop yields though.

If you couple reduced crop yields with millions of hungry refugees, economic instability and general chaos... Who knows. Unlikely to result in human extinction, but very painful for everyone no doubt. I do think it seems unlikely that people would nuke Australia and South America. I honestly hope that if nukes are ever fired, as many countries as possible remain untouched.


Regarding at least Australia not being nuked:

Have you ever heard of https://en.wikipedia.org/wiki/Pine_Gap ?

While that is only one target, and in the middle of nowhere at that, do you still think so? For me that looks like a very valuable target to disrupt the operations of anyone who operates from there.


covid19 wasn’t that disruptive though beyond superficial shit. Nobody went hungry, water supplies kept going, electricity kept flowing, etc. We’re nowhere near the brink of any kind of regional societal collapse, let alone anything national or global.


I think the same about climate change. Humans, as an animal, will survive. Humanity may not.


Honestly Covid-19 was actually quite reassuring for me. There were a few brief blips of panic buying, and a bit of "flee to the countryside", but broadly things kept functioning. People and organisations stepped up and kept things moving.

All those people who fled to the countryside eventually got bored and returned to the city, where things hadn't really changed much.


> While of course we should try to prevent it, we should also try to mitigate the consequences in the event a war does occur!

I used to have an old Stanford Research Institute report from the 1980s where the author tried to project that if the US and USSR had a full scale nuclear war, US GNP would be back to normal within five years.

Who knows what government bureaucrat had a program requiring an Orwellian report like that to justify whatever policy.

Who is this we? If you want to store up MREs and tin foil hats and build a bomb shelter, be my guest. Most of our efforts will be to prevent a large-scale nuclear war from happening or needing to happen.


The hysteria over nuclear war has definitely done great harm to civil defense and preparedness in the USA. I have a print copy of Nuclear War Survival Skills[1] that I review regularly because an hour a year seems worth it to me.

[1] https://www.amazon.com/Nuclear-War-Survival-Skills-Expanded/...


Yes, the topic of the article is human extinction, and one can be pedantic and say, why stray off topic? However, conceptually close is near-extinction, with unfathomable suffering and waste, destruction of nearly all human culture, nearly all humans, and nearly all plant and animal life. It's worth a sentence or two to mention this in the article, to avoid confusion on the part of the reader that the author wishes to minimize the negative effects of nuclear war - which would perhaps make it more acceptable in a way.

That said, the analysis seems sound, and despite the chance that the conclusion could be misconstrued or misused, it is important to remind voters graphically of the horrors of nuclear war. That's a better strategy to avoid both extinction and near-extinction, rather than counting on military planners to not overshoot their goals of near-extinction.


OK, so no one has commented on this yet: another problem with assuming nuclear war kills everyone is that a nuclear warhead becomes a permanent get-out-of-jail-free card.

Everyone knows about a few Russian invasions, the Chinese Uighur genocide or the Nagorno-Karabakh war, and the total inaction against these observed in various international communities, especially between nuclear-capable nations. Incidents of this scale were enough in the past to start a world war, or at least years-long regional conflicts.

While a global (thermonuclear) war not occurring is good, genocides and invasions are generally not; I fear the fear of the former is causing the latter never to be held accountable.


Well, I suggest you delete that post ASAP. I do not like the notion of giving ideas on how to solve climate change and cool down the planet Trump's way, 2 months before he left office.


> extremely bad and worth avoiding

Shrugs???? What exactly do you mean? Quantify this, how bad...with respect to AGI?

> discourages anyone from preparing for potential nuclear wars

Interesting, what would be your suggestions for preparing? Mine would be: AT ALL COST AVOID. (I mean it...ALL COST).

Anything else, and I would classify it as a species-wide failure to adapt to technology (in this case EXISTENTIALLY dangerous tech). Further, there is no way to grasp or accurately predict the confounding factors of a post-total-nuclear-war world that would put humankind at further risk.


> Interesting, what would be your suggestions for preparing? Mine would be: AT ALL COST AVOID. (I mean it...ALL COST).

This gets at the heart of existential risk. Should we abandon all technology to avoid the possibility of nuclear war and just wait for an asteroid or climate change or resource exhaustion to kill us? Nuclear destruction only gets easier over time as technology progresses. My cell phone is faster than the supercomputers of the 80's used to model nuclear explosions. Even North Korea can refine uranium. Guided rockets are cheap enough to launch high school science projects. Hypersonic cruise missiles are on the horizon.

Generally I would propose we avoid nuclear war being an existential risk by going to the stars as soon as possible. That has other costs that, yes, should be balanced against the costs of nuclear war.

AGI is another path to a post-existential-threat future for humanity but probably has the greatest risks because it accelerates virtually all of the existing risky technology and climate trends, balanced only by the hope that its intelligence grows far faster than the risks and that it's value-aligned with humanity.


You’re making the common mistake of assuming AGI significantly increases the advancement of technology. If AI research has demonstrated anything, it’s that intelligence doesn’t increase linearly with computation. Compare two identical chess engines where one has 10% more computational power behind it; that program has a surprisingly limited increase in Elo score.

It’s seductive to think AGI will suddenly advance technology dramatically, but dumping more effort into existing technology has a bad habit of hitting diminishing returns. Ideas like nano-machines seem to have vast potential, until you realize biology is already operating at those scales and has significant limitations.


> It’s seductive to think AGI will suddenly advance technology dramatically, but dumping more effort into existing technology has a bad habit of hitting diminishing returns. Ideas like nano-machines seem to have vast potential, until you realize biology is already operating at those scales and has significant limitations.

There are a few reasons to think that AGI will significantly outproduce humans; it's likely cheap to run compared to the training costs. GPT-3 can generate text quite a bit faster than a human at maybe a billionth of the cost of its training, for example. Once achieved it's likely that practical progress on a lot of problems will actually be pretty rapid. Even if we only achieve parity with human intelligence and can never improve AGI beyond that we'll experience a few orders of magnitude in speedup of research.

I think it's a little early to write off nanotechnology when we haven't seen much more than tech demos at this point. Biology is a soup of proteins using Brownian motion to find reaction sites. Highly impressive and amazing to be sure, but it needs to be wet, fed, and in a narrow temperature band. Precision molecular engineering is a game-changer in so many ways.


Producing text more cheaply is a poor metric for technological advancement. Going to the basics of, say, energy production, heat engines are already close to theoretical limitations. We already have 60+% efficient power plants, so nothing can double future efficiency, and even in theory hitting 100% is never going to happen. The world has better technology and more infrastructure than 10 years ago, but it’s almost entirely evolutionary, not revolutionary. Even the tantalizing self-driving cars in the near future come with a massive downside of congestion from cars driving without people in them.

As to nanotechnology, we already design molecules to interface with biology via synthetic drugs, so essentially biological nanotechnology is going to be about creating larger structures not smaller ones.


> AGI is another path to a post-existential-threat future for humanity but probably has the greatest risks because it accelerates virtually all of the existing risky technology and climate trends

No, it is not. The reason is this: it is science fiction. This makes it no less, or no more, dangerous than anything else I can make up with a quasi realistic chance of existing in the near term future.

When thermonuclear bombs were detonated, they thought there was a possibility that the reaction would get away from them and incinerate the atmosphere. At the first test, they were scared this had actually happened, as the explosion carried on longer than they expected.

The point? They actually assigned a probability (non zero) to this event, and then carried on with the test anyway. So I could say that a hydrogen bomb, using some iteration of technology just beyond our grasp is likely to kill every human on earth in the near future. Am I right? Who knows..but in conjecturing this I am no more or 'less wrong' than anyone saying AGI is the greatest threat.


Urgh, atmospheric ignition was considered but it was studied 6 months before the first nuclear test was conducted: [1] https://www.realclearscience.com/blog/2019/09/12/the_fear_th...

The whole atmospheric ignition thing depends on physics which just doesn't work: specifically, that you can reach a temperature without gravitational pressure that triggers nuclear fusion at ambient atmospheric pressures.

It is in the same category as the "concerns" the LHC would create black holes which swallowed the Earth.


That’s not how it was presented in [0], which is one book I had read, among others. And that’s fine; even if that particular argument is incorrect, the logic of the point I’m making still stands. I can envision a feasible chance of a technological improvement to explosives that would make them ‘completely’ dangerous (kill every last human) if that’s what we’re going for.

[0] https://anniejacobsen.com/pentagons-brain/


If we're comparing instantaneous risks, then I agree with your arguments. The most likely thing that could kill the most people today (in the next 24 hours) is a large nuclear war, followed by things like a nearby supernova, gamma ray burst, etc. An asteroid large enough to wipe out half of humanity before Saturday is really unlikely to be close enough to not have been observed already.

In the first atomic bomb test the atmospheric ignition risk was not a conditional probability over the next 50 years or more but an estimate of P(ignition | nuclear detonation next week). Very tightly bound immediate risk. P(detonation next week) was ~1.

When risk is estimated over the next 50 or 100 years the probabilities change quite a bit, especially because it's possible (though unlikely) that every nation cooperatively disarms its nuclear stockpile. Disarmament is not possible today so it can't count against that risk. It is a factor in the 50-year risk.

Climate change, AGI, asteroids, pandemics, biowarfare, etc. all become potential risks in the longer time scales, but their conditional probabilities are much fuzzier. That doesn't mean it's useless to try to measure the probability; when assigning dollars to counter risk it makes sense to spend relative to the expected utility loss of each potential outcome, times the probability of being able to change the outcome. There's very little reason to spend money on short timescales to avoid a nearby supernova or a gamma ray burst pointed at us. There's nothing we can do. We can do things about nuclear weapons, climate change, AGI, etc. But we need accurate probabilities to avoid the failure of devoting 100% of GDP to the current biggest risk.
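As a toy sketch of that budgeting logic (every number below is invented purely for illustration; this is not a real risk model):

    # Toy expected-utility budgeting: allocate spending in proportion to
    # P(outcome) x (loss if it happens) x P(our spending can change the outcome).
    risks = {
        # name:            (p_outcome, loss, p_we_can_change_it) -- all invented numbers
        "nuclear war":     (0.10,  1e9, 0.2),
        "asteroid strike": (0.001, 7e9, 0.5),
        "gamma-ray burst": (1e-8,  7e9, 0.0),  # nothing we can do, so it gets no budget
    }

    def budget_shares(risks):
        # Score each risk by tractable expected loss, then normalize into budget shares.
        scores = {name: p * loss * tractable
                  for name, (p, loss, tractable) in risks.items()}
        total = sum(scores.values()) or 1.0
        return {name: score / total for name, score in scores.items()}

    print(budget_shares(risks))  # nuclear war dominates under these made-up inputs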

AGI is not a threat within the next day or probably the next year, and even if it was it would be almost certain that events are already in motion and nothing could be done. Dealing with AGI risk means spending an appropriate amount of money to offset risks that might appear more than a year or two out.

Searching the skies for unknown asteroids is worthwhile because only with enough advance warning can anything be done, and it's quite likely we could avoid a strike with enough warning (years).

It's in everyone's best interest to disarm every nuclear weapon but there's a strong defection incentive to be the last one with them which we also have to be realistic about. Achieving zero nuclear weapons is likely to require a one-world government which is unachievable in any short or medium timeframe. What do you propose we spend money on that actually moves the conditional probabilities away from global nuclear war? Do those proposals (even with ludicrous amounts of money) have a higher likelihood of success than risk-dollars-adjusted reduction of climate change for example? If not, then we can't prevent nuclear war at any cost as you originally proposed. In fact we can't prevent nuclear war in the first place; there is no set of concrete steps (short of globally synchronized sabotage) to take this year or even next. Human beings will have their fingers on the nuclear triggers for the foreseeable future. We have to mitigate the other risks that we can right now otherwise it's a waste of money. Hopefully as the world becomes more stable it will eventually become possible to actually prevent nuclear war through disarmament.


Here's the fundamental problem as I see it: probabilities are meaningless. We can talk about conditionals...potentials...likelihoods....whatever until we are blue in the face. All they do is provide a thin veneer over what I see is the truth: we simply have no real idea regarding most existential risks. Probability is a false god that justifies our wasting resources on problems we can't hope to solve, as we don't even understand them. AGI is the canonical problem of this type--absolutely no real understanding of the parameter space. What can it do...can it do this or that...what will it do...how to bound/align what it will do...???; well--just tell me what it is first. Aliens could show up tomorrow and smoke us (in theory), so what dollar amount should we spend on that (it's the same problem really...what capabilities will the aliens have so as to focus our efforts on their not smoking us).

> What do you propose we spend money on that actually moves the conditional probabilities away from global nuclear war?

First, let me say that I appreciate the frank discussion, and that I also appreciate the original author's willingness to author posts on this topic, these are in fact important issues. To answer your question, if it were my decision (I actually just responded to a couple RFPs with a sketch of a system for viral bioinformatics [which were declined]), I would say our largest current existential threat is our systems of government that have not made significant progress in the last couple hundred years. Regardless of how we score existential threats (I eschew probabilities personally), it's clear that we will need effective governance to respond to them as they arise (let's call covid a fire drill that we failed miserably at). We need research into questioning the fundamental assumptions by which we govern (secrecy, militancy, the paradigm of competition). How do we set goals as a world, not as nations? How do we engineer trusted environments, not zero trust environments? This is where my money would be.


I think everyone who uses probabilities to make decisions agrees that all our models are flawed in many ways. But how else do you propose to weigh different possibilities?

As for your proposal for how to spend money, I agree those are fruitful and promising directions. I think the fields of public choice theory and mechanism design are working on some of the problems you mention.


> AGI is the canonical problem of this type--absolutely no real understanding of the parameter space. What can it do...can it do this or that...what will it do...how to bound/align what it will do...???; well--just tell me what it is first. Aliens could show up tomorrow and smoke us (in theory), so what dollar amount should we spend on that (it's the same problem really...what capabilities will the aliens have so as to focus our efforts on their not smoking us).

I'd say aliens are in the supernova and gamma ray burst territory; even if we understood a lot about them we would be mostly powerless to counter them as an existential threat because there are very few actions we could take to deflect, defeat, or escape alien attack. Reasoning with them is probably the best we could do.

> I would say our largest current existential threat is our systems of government

Just to be clear; do you mean completely wiping out humanity or a reset to some small population that might have another chance at it?

> We need research into questioning the fundamental assumptions by which we govern (secrecy, militancy, the paradigm of competition). How do we set goals as a world, not as nations? How do we engineer trusted environments, not zero trust environments? This is where my money would be.

This is a lot closer than you may realize to what a lot of AGI existential risk folks research. AGIs will almost certainly be agents in the world; able to act and smart enough to understand their ability to act and affect the world and themselves. The correspondence to governance is pretty close; humans are pretty smart actors who understand a lot of the dynamics they have control over and have goals they want to achieve. AGI will have to fit into an ecosystem of humans and their governments in a beneficial way, and a big part of solving AGI risk is understanding governance, decision theory, goals, and values well enough to not create AGI that turns into a despotic warmonger to achieve its unaligned goals. We know that human agents can end up as despots.

Take the question of how to set goals as the world, not as nations. That extends down to the individual level as well; the coordination problem, unaligned incentives, the tragedy of the commons, decision theory, and other game theory problems can be extended from individuals vs. individuals to nations vs. nations. If anything, AGI will be easier to solve than human governance both because there will be fewer ethical restrictions and because we can (hopefully) engineer AGI in ways that align with human values and incentives instead of trying to reason with humans.

A trusted environment is, if I take your meaning correctly, one in which nations have aligned their incentives so closely so as to be able to trust that the actions of other nations are honest and in good faith because any other options would be counterproductive. That's at least the sense I get from trust between individuals; after enough time with someone one can be fairly certain that they have enough shared values and goals to be able to come to similar decisions as one would, and trust them to act in accordance either through mutual shared interest (including friendship, admiration, respect, etc.) or the benefit of continued cooperation in the future. I suppose my first thought is that having "nations" to begin with breaks a lot of shared incentives because it overlays unevenly distributed wealth and opportunity with arbitrary legal and social groupings that nevertheless a lot of people care very deeply about. Getting nations to truly cooperate probably means giving up a lot of identity based on tradition, origin, and history. I have no clue how best to convince people that what really matters about them and their neighbors is how individually happy, fulfilled, and free to explore they are regardless of their differences. Getting rid of false beliefs in zero sum economics and the futility of status signalling might be a start, but humans are sort of innately programmed for a lot of counterproductive beliefs about status, pride, fairness, worth, etc.

Not getting too far afield, I am also curious how you reason under uncertainty without probabilities. E.g. you don't know how things may turn out but have some ideas; how do you choose what to do? How will you know if you made a good or bad choice? Is it quantifiable?


> Just to be clear; do you mean completely wiping out humanity or a reset to some small population that might have another chance at it?

Either might occur, given deployment of nuclear or bio technology.

> Getting nations to truly cooperate probably means giving up a lot of identity based on tradition, origin, and history. I have no clue how best to convince people that what really matters about them and their neighbors is how individually happy, fulfilled, and free to explore they are regardless of their differences. Getting rid of false beliefs in zero sum economics and the futility of status signalling might be a start, but humans are sort of innately programmed for a lot of counterproductive beliefs about status, pride, fairness, worth, etc.

Yes, and I think one needs to realize that this is a 100 year project. Technologies like nuclear fusion and quantum computing I put into this '100 year' category, that is, those that would catalyze profound transformations of humanity given about 100 years and sufficient resources (synthetic biology being another, Abstract General Intelligence another [I do not like the term 'artificial']).

So rebuilding and rethinking completely the way humans govern themselves is, I think, one of these 100 year projects. Things like nomadic citizens (humans without borders), economics with externalities as an explicit factor, zero-secrecy, non-zero-sum international relations. All stuff that is easy enough to write down, but with no real chance of swift movement towards. Future economic systems are hopefully cooperative, perhaps further enabled by greater gains in digitizing the economy (though risks of greater, further-destabilizing inequality seem just as likely).

> reason under uncertainty without probabilities.

I got started in math and computation working on quant stuff during the first financial crash. It was a front row seat to how probability goes wrong (your copulas aren't worth the pixels you graphed them on). The idea is that you have to recognize some things just aren't accurately quantifiable. If you don't recognize that, you just place yourself in a situation where you have a false sense of certainty about uncertainty. You just simply can't assign a probability accurately to anything you so choose. It's not that probability isn't worthwhile, it most certainly is, the point is to be able to accept when you are dealing with a situation where it is non-informational (and likely even counter-informational).

