I just finished listening to it on Audible. It is certainly thought-provoking, but full of contradictions, as others have mentioned. Namely, the claim that this technology cannot be contained, and yet must be contained, is pretty doom and gloom. The prognostications about artificial intelligence are hardly as scary as the ones made about genetic sequencing: that you can buy a device for $30k that will print pathogens and viruses for you in your garage. That's some scary stuff.
You've been able to buy plasmids and make whatever bacteria you want for a few decades now. AI may help, but it certainly doesn't cost $30k to cause mischief. Pretty sure I learned that in Bio 102.
I just want to echo this in slightly different wording: AI will provide step-by-step guides for making viruses that just about any idiot can follow, very cheaply, and within a year.
I really really hope I'm missing something big here.
Having a step-by-step guide and actually being able to follow it are two very different things. If you follow YouTube channels like The Thought Emporium you'll see how hard it is just to duplicate existing lab results from published sources in biology. To go a step further and create new dangerous things without also getting yourself killed in the process is a pretty tall order.
We should be talking about the more abstract problem of asymmetric defense and offense.
Imagine that nukes were easy to make with household items. That would be a scenario where offense is easy but defense is hard, and we would not exist as a species anymore.
Once a hypothetical technology like this is discovered even once, it's not possible to put the genie back in the bottle without extreme levels of surveillance.
We got lucky that nukes were hard to make. We had no idea that would be the case before nuclear physics was discovered, but we played Russian Roulette and survived.