Are you excited that an AI could, in the future you describe, spit out correct instructions for creating a virus more dangerous than COVID to anyone who asks?
People seem to fundamentally misunderstand the problem space of AI.
I assume that you are implying that AI will be able to "figure out" how to synthesize a virus, because something like GPT4 is sure as shit not going to be trained on materials on how to specifically synthesize viral weapons.
That "figure out" part is where you make a whole shitload of assumptions, one of which is that P=NP.
Yeah - that's not how that works, I believe. Some problems are harder than others, and the optimal virus it could produce could take orders of magnitude more time/computation (edit: to produce an effective antiviral).
Also, imagine any one of the billionaires buying up all the computing power they can to do something nefarious.
Or the amount of computing power the US could use to produce targeted bioweapons? How could the public compete?
That's without imagining that they could worm their way into most people's devices (I believe this has already happened to some extent) and extract some computing power from them.
That's what you believe, but it's not necessarily correct. You assume asymmetry in favor of the attacker, but this patently does not apply to e.g. cryptography; the way it's going, we would get more, not less, security out of AIs, by automating testing, audits and formal proofs. And, importantly, defense is a common good; best practices could be easily spread and applied in an economical way with AI, whereas attackers work on their own.
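To make that asymmetry concrete, here's a toy sketch with a one-way function (a deliberately simplified illustration, not a claim about biology): the defender verifies a secret with a single hash call, while the attacker has to brute-force a search space that grows exponentially with the secret's length.

```python
import hashlib
import secrets

def verify(guess: bytes, stored_digest: str) -> bool:
    # Defender's cost: one SHA-256 call, regardless of how large the key space is.
    return hashlib.sha256(guess).hexdigest() == stored_digest

def brute_force(stored_digest: str, n_bytes: int):
    # Attacker's cost: up to 256**n_bytes hash calls.
    for i in range(256 ** n_bytes):
        candidate = i.to_bytes(n_bytes, "big")
        if verify(candidate, stored_digest):
            return candidate
    return None

secret = secrets.token_bytes(2)  # a tiny 2-byte secret so the demo actually finishes
digest = hashlib.sha256(secret).hexdigest()

print(verify(secret, digest))            # defender: instant
print(brute_force(digest, 2) == secret)  # attacker: up to ~65k hashes; at 16 bytes it's hopeless
```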
Many functions are asymmetrical in favor of defense. Viruses, too, are not magic; the more sophisticated and powerful a virus's mechanism of action, the longer its genome has to be, and the worse it is at spreading and surviving the elements (consider how fragile HIV is). Viruses are already tremendously optimized by selection, due to very rapid replication and the constant pressure of immunity and medicine. You'd think COVID is merely a warning, but mechanistically it's probably very close to the strongest attack feasible with our biology. Not the most virulent by a long shot; but very good at overcoming our generic defenses.
Crucially, it wasn't created with AI. Without any AI, we know perfectly well how to make super-COVIDs; it's limited by access to hardware for microbiological research, not compute or algorithms.
Rapidly designing antivirals, on the other hand, does benefit from AI.
You display a powerful negativity bias which is pervasive in such conversations. You completely ignore AI as a force for good and consider it, essentially, an offensive capability, from which it follows that it must be handed over to incumbents (I take issue with this logic, of course). But that's a self-fulfilling prophecy. Any advantage, centralized enough, becomes an instrument of oppression.
Could you describe my strong negativity bias? I have worries that come to mind - just like people were worried that the atom bomb would ignite the atmosphere - and I think they are fair.
I have a hard time understanding your point - not a jab, genuinely - I agree with your last point, that any advantage, once centralized, becomes an instrument of oppression, and that's mainly where my issue with it lies.
I'm not a doomer at all, I'm personally not afraid of AI. I'm just extending the logic of the previous commenter.
AI could overcome a lot of problems, for a lot of people. Talking out of my ass, but say Jeff Bezos wants to start a lab to make super-COVID or whatnot, and his hurdle is having access to restricted hardware - how hard is it to get the AI to design the hardware?
Regulation of anything becomes basically impossible - and I think that's enough of a worry in itself. (Edit: to clarify, absence of regulation brings us back to your final point - centralized power leads to oppression. Regulation is supposed to make power less centralized, other than for the common good (yeah yeah I know), so removal of regulation means untethered power for the already powerful.)
> As long as the AI (that anyone can access) can also spit out an equally powerful antiviral.
You:
> Yeah - that's not how that works, I believe. Some problems are harder than others, and the optimal virus it could produce could take orders of magnitude more time/computation (edit: to produce an effective antiviral).
With «not how that works» you, I think, implied that there's no reason to expect that proliferation of AI could offset (or indeed decrease) the risk from AI accelerating GoF research. Admittedly I'm not sure specifically about someone's local model designing an antiviral to a new pandemic; that'd certainly happen first on an institutional cluster. But local systems can still assist with, e.g., monitoring environmental data for new DNA signatures and reporting curious finds.
Anyway, I understood this, in conjunction with other risks you pitched, as a general issue of AI capabilities not offsetting AI risks. I believe this needs better arguments, because many real-world scenarios seem advantageous to the defending side, even when it's "weaker" in terms of resources, and disadvantageous to attackers, who run up against natural constraints. An AI filter can see past clever attempts to hide signatures of a spam message (perhaps well enough that passing spam will just stop looking like anything a human would write or read and will be detected by simple heuristics). An AI-fortified firewall will be vastly more reliable than anything we've got now, possibly strong enough to ward off superintelligent attackers. An AI biomed assistant can design vaccines and medicines against entire classes of pathogens, in a way that cannot be overcome just by generating more variants in a wet lab. This is not wishful thinking; it's a very real question. People often fear AI proliferation as something like everyone getting tabletop nukes, and I think this is an entirely wrong analogy, because it's impossible for physical reasons to build nuclear-powered shields or something; but in the realm of resource-constrained intelligence, «that's not how it works».
> and his hurdle is having access to restricted hardware - how hard is it to get the AI to design the hardware?
Pretty hard. But more importantly, everyone interested knows the designs. It's just capital-intensive to the point of impossibility: you need a lot of precision and high-purity materials, so you're forced to buy from established vendors. People tend to overestimate the importance of secrecy in keeping the world livable; I think it's largely a result of propaganda by state security, which is constitutionally biased towards this line of thinking.
In the limit of this logic, with AI helping design some precursor to a threat, you'd just be left arguing that AI can make civilization so efficient that any crackpot wannabe comic villain will be able to hide a full supply chain, from mining raw minerals to microchips and bioweapons, on his Texan ranch. Some people bite that bullet, and sure, I think that is doable. But are you sure that a civilization of such logistical prowess would be anything like our own? That it would still be vulnerable to crackpots spreading COVID? That it wouldn't just crank out, say, a few billion UV air purifiers for good measure, because that'd be cheaper than checking?
Be that as it may, I'll pick the prospect of that civilization over the current one, to say nothing of the stagnation AI-risk hall monitors want to impose.
Whatever advantage AI can give you, governments and millionaires will have even more access to it. And asymmetric advantages are not exclusive to the "good side", as I'm sure you can imagine.
I'm not sure about your point about nuclear-powered shields. What are you talking about?
And about your tangent on the supply chain - I doubt Jeff Bezos has issues getting his hands on anything, really - including the materials needed to make one lab? The guy makes rockets; how hard is it to hide enough material for a single building? And you have an AI to ask for help - the only safeguard we've put up as a society is regulation, and this puts that in jeopardy, to my understanding.
Yes, strong actors will have further access to AI, just as they have to everything else. I believe that on net, scaling properties in this domain are such that proliferation of AI democratizes the world rather than the other way around. The core advantage of strong actors is being able to employ capable (smart) people, after all, and AI diminishes that edge.
> I doubt Jeff Bezos has issues getting his hands on anything, really - including the materials needed to make one lab?
Precisely. If he wanted to kill us all with super-Covid, he probably would have pulled it off. Which is my point: it's not the lack of AI that prevents this scenario.
If you are scientifically minded, I think you should consider how the second law of thermodynamics poses a problem for your hope/assumption that AI can generate both good and bad outcomes with equal probability.
If you are monotheistically-minded, consider "Satan's ratchet": It's always easier to lie, kill and destroy than to disseminate truth, raise something from the dead, and build.
P.S. I just made up this bit about Satan's ratchet but I think it has a nice ring to it.
Who says there is an antiviral for every virus? You can't go doing something because you assume there is a solution to the problem you create - that's irresponsible, and if you think that, you should be denied all access to modern tech/society.
Who says there exists a way out of the regulatory-authoritarian attractor for AI?
Who could've known that nuclear energy is a far lesser threat to humanity than climate change from burning fossil fuels? Certainly not the legions of activists and media producers who installed the image of green mutagenic goo in people's minds.
Just because you do not even conceive of some risk, or don't take it seriously, doesn't mean you get to play the Responsible Adult In The Room by pontificating about the risks of things other people do.