I posted this in part because it explains why I think attempts to regulate AI (including the recently-proposed moratorium) have a decent chance of working. They won't prevent dangerous misuse of AI technology altogether, but they can substantially reduce the risks, much like with bioweapons.
The incentives are different. Bioweapons are mainly destructive and not useful for achieving any sane aims (they don't secure energy, they don't increase local prosperity, they don't win land wars, they don't target political opponents, etc.). Anyone who defects from banning them gains a target on their back, and that is it. I can't think of a use for bioweapons.
Contrast that with AI. Literally anyone who gets involved, from a teenager upwards, gains magical new powers across a range of arts and sciences, as well as new insights into any questions they have. There is a vague promise of utopia where it no longer makes sense for humans to work (it probably won't work out that way, but whatever). Defectors gain massive advantages and can also maintain plausible deniability.
I can't speak for the parent, but there is a point at which I just stop caring. Either things will get bad enough and people will revolt, like they tend to do when it's too late, or they won't. Nihilism is one hell of a drug.
Separately, on a personal level, I find social justice annoying.
But then I have coffee and play with my kid and no longer wish to destroy the world.
I don't know how much "hidden" knowledge there is in AI research. In biotech, such details can easily make or break an experiment, and there are just so many of them; they are all trivial once known but absolutely critical. Some of them actually have zero reason to exist, but they persist anyway, like a block of legacy code commented "do not remove, shit breaks when removed". Nobody knows why, and everybody just follows it religiously.
So the same approach may or may not work for AI. Forcing AI research underground may not hinder it that much...
The issue is that most "underground" groups do not have access to top researchers or experienced engineers; those tend to be public figures, or people who want to become public figures. Furthermore, complete secrecy requires tightly restricting communication in a way that makes collaboration even within an organization harder; note how terrorist groups have to operate in isolated "cells."
I'm sure that anyone could replicate GPT-4, but creating radical new advances would be very hard to do in secret.
Furthermore, while many such groups (particularly state actors) have plenty of funding, other groups will need a great deal of time and effort to raise or steal the necessary cash in secret, potentially making the costs outweigh the benefits. If the operation is profitable, you'll also need to launder the proceeds, which reasonable crypto regulations would make harder.
Ultimately, nobody can entirely stop "underground" AI. If you think AI will bring about the singularity and destroy mankind, laws cannot stop it. But if your worries are more pedestrian, merely reducing the prevalence of such systems is more than enough.
All those barriers and more apply to nuclear weapons. Prior to the Manhattan Project, nobody was even sure if nuclear energy could be sufficiently weaponized.
The biggest barrier to bio weapons is, I think, motivation. Bio weapons are hard to target effectively, and there is a chance of blowback. In addition, unlike nuclear weapons, they are not as useful against military targets. A nuclear bomb can annihilate a tank column; by contrast, every major military can offer its soldiers bio weapons protection (for example, filtration systems in tanks).
In addition, because of incubation periods and the like, the targeted country always has time to retaliate, if not with bio weapons then with nuclear weapons.
Thus it is not clear what bio weapons buy a country that nuclear weapons don’t.
One should also take the ethics of deploying a bio weapon into account. The world has moved beyond realpolitik and the age-old "might makes right" attitude; using biological warfare in an unethical manner would result in condemnation from the international community.
I so wish that were true, but if Ukraine proved anything, it is exactly that "might makes right" is alive and well (for example, Germany only buckled after Nord Stream blew up, and India simply said no to US restrictions on gas purchases).
We need utility-scale molecular sensing.
How will we prevent bioterrorism when it gets much easier to make super-pathogens? How much freedom will we have to restrict?
These questions are answered and/or mooted if enough people have solid-state, label-free, universal molecular sensing at home.