I wish people would stop with that Cold War narrative. You're not waging anything just yet.
Here's the thing: the US didn't win the OG Cold War by being, as 'AnthonyMouse puts it upthread, "the country that has an elected government and constitutional protections for human rights" and "having a stronger economy". It won it by having a stronger economy, which it used to fuck up half of the world in a low-touch dance with the Soviets that had both sides toppling democratic governments, funding warlords and dictatorships, and generally doing the opposite of protecting human rights. And, at least through part of that period, any American citizen who disagreed, or urged restraint and civility and democracy, was branded a commie mutant spy traitor.
My point here isn't to pass judgement on the USA (and to be clear, I doubt things would've been better if the US had let the Soviets take the lead). Rather, it's that when we paint the current situation as the next Cold War, there's a kind of cognitive dissonance at work. The US won the OG Cold War by becoming a monster and not pulling any punches. It didn't have long discussions about how to safely develop new technologies - it went full steam ahead, showered R&D groups with money, and meanwhile sent more specialists off to fuck up another country to keep the enemy distracted. This wasn't an era known for a reasoned approach to progress - this was the era that designed nuclear ramjets with zero shielding, meant to zip around enemy territory, irradiating villages and rivers and cities as they flew by, because fuck the enemy, that's why.
I mean, if it is to happen, it'll happen. But let's not pretend you can keep Pax Americana going by keeping your hands clean and being a nice democratic state. Or that how serious either side is about AI safety matters here. If it becomes a Cold War, both sides will pull out all the stops and rush full-steam to develop and weaponize AGI.
--
EDIT - an aside:
If the history of both sides' space programs is any indication, I wouldn't be surprised to see the US building a world-threatening AGI out of GPT-4 and some duct tape.
Take, for example, US spy satellites - say, the 1960s CORONA program. Less than a decade after Sputnik, with no computers to speak of and engineering fields like control theory still under development, they pulled off a program that involved putting analog cameras in space on weird orbits, taking ridiculously high-detail photos of enemy territory, and then deorbiting the film canisters so they could be snagged mid-air by an aircraft trailing a hook. If I didn't know better, I'd say we don't have the technology today to make this work. The US did it in the 1960s, because it turns out you can do surprisingly much with surprisingly little if you give creative people an infinite budget, motivate them with a basic "it's us vs. them" story, and order them to win you the war.
As impressive as such feats were (and there were plenty more), I don't think we want that same level of focus and dedication applied to AI - if that's a possibility, then I fear we've already crossed the X-risk threshold with the "safe" models we have now.