Deliberately optimizing for harm (science.org)
445 points by herodotus on March 16, 2022 | 279 comments


The "dual use" paper this is commenting on is the clickbait equivalent of "encryption is for pedos", and maybe Derek's "not too surprising" is code for "Science editors are not discerning enough".

Like, this is the whole point of pharmacology: Predicting the biological interactions of chemicals (what they do to biological targets, how potent), and their ancillary physical properties (solubility, volatility, stability etc). For example,

Optimizing for mu-opioid agonist activity gives you super potent painkillers, drugs of abuse, and that stuff Russia gassed a theater with to knock out / kill hostages and kidnappers (i.e. fentanyl analogues)

Optimizing for inhibition of various proteases might give you chemotherapy drugs with nasty side effects, or stuff with nasty side effects and no known therapeutic use (i.e. ricin)

Optimizing for acetylcholinesterase inhibitor activity will turn up nasty poisons which could be purposed as "nerve agents" or "pesticides"

Optimizing for 5HT2a activity will give compounds that are great for mapping receptor locations in brains, which are also drugs of abuse, and which are also lethal to humans in small doses.

And the "predicted compounds not included in the training set" thing is just table stakes for any predictive model!


> 5HT2a

You sure you don’t mean 5HT2b?

I mean, anything can be toxic with enough dose, but the b-subtype agonists seem a lot more toxic than the a-subtype agonists.

(Fun fact: 6-APB, a “research chemical” recreational substance, became an actual research chemical because it had better 5HT2b selectivity than what was previously used in lab research)


Was thinking of halogenated NBOMe series - observed in humans to have a pretty narrow therapeutic index re: death, cheapish synth, can be vaporized

But yeah 2b could be worse. Or many other targets as well

Funny thing, optimizing "research chemicals" for (1) uncontrolled synthetic pathway and (2) potency is common to Institutional, Druggie, and Terrorist researchers. None of them want to go through the bureaucracy for controlled substances and potency is good for [better controlled experiments / smaller quantities to transport / more killing power]


researchers will really care about selectivity, but potency can help with the amount of paperwork for sure (and cut synth costs!)


Fortunately we don't see any real work in the chemical and biological weapons space anymore. While it would still be pretty handy for terrorist groups, in actual warfare chemical weapons aren't super useful. See https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch... .


On a tangent: it occurred to me recently that we also don't see much use of ICBMs with non-nuclear payloads, despite these being a fairly-obvious "dominant strategy" for warfare — and one that isn't banned by any global treaties.

I'm guessing the problem with these is that, in practice, a country can't use any weapons system that could potentially be used to "safely" deliver a nuclear payload (i.e. to deliver one far enough away that the attacking country would not, itself, be affected by the fallout) without other countries' anti-nuke defenses activating. After all, you could always say you're shooting ICBMs full of regular explosive payloads, but then slip a nuke in. There is no honor in realpolitik.

So, because of this game-theoretic equilibrium, any use of the stratosphere for ballistic weapons delivery is effectively forbidden — even though nobody's explicitly asking for it to be.

It's interesting to consider how much scarier war could be right now, if we hadn't invented nuclear weapons... random missiles just dropping down from the sky for precision strikes, in countries whose borders have never even been penetrated.


> despite these being a fairly-obvious "dominant strategy" for warfare

I don't think these are quite as viable as you think. ICBMs are expensive. Probably tens of millions of dollars each, for a single-use item. Cruise missiles cost $1-$2 million to deliver the same payload and have a better chance of surprising the enemy.

ICBMs have longer range, but how often do you need to strike targets more than 1000km past the front line? They're inherently strategic weapons.


Even cruise missiles are horribly expensive for conventional munitions payloads. Cruise missiles were developed to deliver nukes.

An unguided 1000kg "dumb" bomb costs $2,000. A "smart bomb" costs $20,000 to $100,000. A cruise missile costs $1mil to $2mil.

In the scope of any protracted real war, sending out lots of cruise missiles is horribly inefficient. Much much cheaper to send out a few planes to drop 100's of tons of dumb or smart bombs. IOW, you can deliver 10x to 100x more boom if you just use planes and bombs. 1000x more if you use long range artillery. But then, the pilots or soldiers are at risk-- and that is a political calculation.
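
For a rough sense of those ratios, here's a minimal back-of-envelope sketch using only the ballpark unit costs quoted above (illustrative figures, not official procurement prices):

```python
# Cost per delivered weapon, using the rough figures quoted above (not official prices).
weapon_cost_usd = {
    "dumb bomb (1000 kg, unguided)": 2_000,
    "smart bomb (guidance kit)":     50_000,     # midpoint of the $20k-$100k range above
    "cruise missile":                1_500_000,  # midpoint of the $1M-$2M range above
}

cruise = weapon_cost_usd["cruise missile"]
for name, cost in weapon_cost_usd.items():
    print(f"{name:32s} ${cost:>9,}  -> ~{cruise / cost:.0f}x the payload per dollar vs. a cruise missile")
```

With those figures, iron bombs come out roughly 30x to 750x cheaper per delivery than cruise missiles, in the same ballpark as the 10x-1000x claim above, before even counting the cost and risk of the aircraft and crew.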


About cruise missiles, wasn't there one of those DARPA contests to see if a guy in the garage could produce a cruise missile? IIRC it got quite scary and was cancelled or something. Being in the drone and high power rocketry hobby, I have absolutely no doubt there's enough knowledge and electronics availability for a guy in their garage to come up with something that delivers 50lbs to a precise gps coordinate a few hundred miles away for less than $10k. Once you do that, it's easy to scale up to 500lbs


I'm skeptical you could hit that sort of range for anywhere near $10K. Forget the electronics, you need an engine powerful enough to lift a few hundred pounds for that distance. Unless you want it detected immediately by early warning radar you need it to fly at a low altitude, like a hundred feet or less. Unless you want it to take forever and be susceptible to infantry with small arms it needs to be traveling fast, in the hundreds of miles an hour. That's simply not possible with an electric system with today's technology, and a rocket engine won't provide the endurance or efficiency you need. That leaves a jet engine or piston engine. Plus, flying at that speed and altitude means you need an effective autopilot system that uses terrain-following radar. You'll also need some nice guidance packages that allow the shooter to set multiple waypoints, so the missile doesn't have to just fly a direct course. And a 50 lb payload of high explosive just isn't that helpful. There aren't a ton of targets where you only need 50lb of explosives to defeat them, that are also going to stay in the same exact GPS position long enough for your missile to travel a few hundred miles. So you'll want a different terminal guidance method, either some sort of radar sensor or infrared.

I don't think you could get an engine capable of getting you hundreds of miles at that speed and altitude, much less the sensors and guidance system.


A lot of the cost in defense contracts is paying for guaranteed domestic supply in the event of a war. If you buy COTS parts and outsource your machining I see $10k as very achievable, the majority of that being metal and explosives. The electronics are nearly trivial, but some of the IR tracking is embargoed and hard to get outside the US.


Find me an engine capable of going a couple hundred miles an hour for less than $10K though


Hydro-formed "Escopette"-style valveless pulsejet? They run as efficiently as a turbojet (w.r.t. specific fuel consumption), and Mach 0.7 is way beyond your "couple hundred miles an hour".

It will be burnt up by the time it runs out of fuel, though.

For a good one it has to be hydro-formed out of seamless pipe, which is likely hot-stretched (like wire drawing, but using an induction heating coil in place of the die, and independently controlling the feed-rate and the pull-rate) in advance to retain consistent wall thickness in the engine despite the changing cross-section after hydro-forming. Also probably incremental hydro-forming with grain-structure-fixing re-heating between stages.

The hard part is just that most of the work is in making the hydro-forming dies/tools, not then using them to cheaply produce more engines.


Fascinating, haven't heard of this. Some googling suggested there is some active research in this area, and at least some active use by hobbyists and by the military in target drones. Seems like a promising angle for a cheaper cruise missile, at least on the engine front.


They are loud, though, because they use ~100 Hz wave compression for their combustion, instead of continuous-flow as with a non-pulse jet.


A solid rocket engine. It's not hard. Did you mean missiles or drones?


You're going to have serious troubles implementing a cruise missile with a range in the hundreds of miles with a solid rocket engine. While most cruise missiles use a solid booster to launch the missile and get it up to the proper speed and altitude, you need an engine to sustain flight. You also need a propulsion mechanism that allows you to control thrust, which a solid engine wouldn't. To my knowledge there is no cruise missile in existence that uses just a solid rocket engine. In theory you could get into the hundreds of miles with just a booster if you use it to get up to a significant altitude and glide the rest of the way. But that largely defeats the purpose of a cruise missile: traveling at low altitude to avoid enemy radar.

Of course, you can definitely make a BALLISTIC missile with a couple hundred mile range using a solid rocket engine, and probably for fairly cheap (especially if you don't particularly care about accuracy)


> BALLISTIC missile

Also: Scud missiles are comparable, made outside western cost inflation, and still ~$1M each. So I'm guessing you can't get a ballistic missile anywhere near $10k (at least, not one that isn't more likely to kill its owner).


Yeah, I got confused. Those are harder to reduce cost on. I had this conversation previously about javelins, etc.


Well, unscrupulous sellers find a way even around embargoes...

https://disclose.ngo/en/article/war-in-ukraine-how-france-de...

(An order of magnitude larger helicopter carrier contract was cancelled eventually.)


V1 was 160 miles, 1800lb warhead? You saying a bunch of nerds in 2022 can't beat 1940's tech to bring the price down a bit?


I think they can bring the price down a bit, but not to $10K, even with a smaller size payload. Also I should mention the V1 was barely accurate enough to target a city the size of London. Not that adding modern GPS guidance would necessarily cost that much, of course.


Gliders though.


So you use a solid rocket booster or maybe make it air-launched and have it glide to the target? So that is definitely a thing, an example I can think of is the US small diameter bomb (https://en.wikipedia.org/wiki/GBU-53/B_StormBreaker ) or the JSOW (https://en.wikipedia.org/wiki/AGM-154_Joint_Standoff_Weapon ). Though those aren't in the "hundreds of miles" that OP originally proposed, they are in the 50-75 mile range. Those also technically aren't cruise missiles, and don't have the primary advantage of a cruise missile: the ability to travel at low altitudes to evade enemy radars and air defenses.

That said, you could probably make a glide bomb for pretty cheap. The two examples I gave above are in the hundreds of thousands, but I bet if you sacrificed some of the accuracy and payload, and you were really optimizing for cost, you could get that into the tens of thousands


There was a man in New Zealand trying to make a very low-cost DIY cruise missile as a hobby project [0]. iirc, he was using a pulsejet engine, but ended up getting shut down by some visits from stern-looking government agents.

0: https://www.theguardian.com/world/2003/jun/04/terrorism.davi...


> wasn't there one of those DARPA contests to see if a guy in the garage could produce a cruise missile? IIRC it got quite scary and was cancelled or something.

This sounds similar to the Nth Country Experiment, conducted by the US in the 1960s. The idea was to see if three freshly minted physics PhDs could design a working nuclear weapon using only publicly available information. In less than 3 years, the three succeeded in creating a credible design for an implosion-style bomb.

https://en.wikipedia.org/wiki/Nth_Country_Experiment


> ICBMs are expensive. Probably tens of millions of dollars each

For sub-launched ICBMs (like the UK's nuclear deterrent) you also need to factor in the through-life costs of the launcher platform, and the fact that once it starts launching, it has given itself away. We only have four subs, not all of which are on patrol, so it would be barking to compromise these to deliver a conventional payload.


Submarine launched ICBMs probably wouldn't be valuable without nuclear weapons though: they're valuable because they're a guaranteed second-strike capability to a nuclear first strike - you might plausibly get the land and air launchers, but there's no way to be sure you got all the subs and you only have to miss 1 (i.e. no country would survive losing 28 major cities).

Without nuclear weapons, they're kind of pointless - you don't inflict enough damage to warrant the expense. But the first strike is also so much more survivable.


Which country is "we"? The US has 14 Ohio class SSBNs in service until at least 2029



Yes, the UK (I should have made this more explicit).

The UK's current deterrent force is expected to be replaced by the successor Dreadnought class [0] in the 2030s. They are currently projected to cost £31 billion (likely an underestimate) for four subs, each of which can carry 8 missiles max. Again, these are a horribly expensive way to deliver conventional explosives when we have cruise missiles instead.

[0] https://en.wikipedia.org/wiki/Dreadnought-class_submarine
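
To make the "horribly expensive" point concrete, here's a rough sketch using only the figures above (projected programme cost and boat/missile counts; it ignores the missiles, warheads, crews, and running costs entirely):

```python
# Platform cost per missile carried, from the figures quoted above.
programme_cost_gbp = 31e9   # projected cost of four Dreadnought-class boats (likely an underestimate)
boats = 4
missiles_per_boat = 8       # maximum load quoted above

per_missile_slot = programme_cost_gbp / (boats * missiles_per_boat)
print(f"~£{per_missile_slot / 1e6:.0f} million of submarine cost per missile carried")
# roughly £970M of platform per missile slot, before you even buy the missile
```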


> ICBMs have longer range, but how often do you need to strike targets more than 1000km past the front line?

Don't you need to also consider the vast expense countries (okay, mostly just the United States) spend to essentially extend their "front line" well beyond their own borders?


Well the first year of the Iraq War cost the US $54 billion, according to Congress's budget[0]. This doesn't include the total cost of the supporting infrastructure needed to be able to deploy troops in Iraq quickly, but we can estimate that using the increase in defence budget from 2002-3, or $94 billion ($132B in 2020)[1].

According to Wikipedia, Minuteman III ICBMs have a 2020 unit cost of $20 million[2], so for the cost of an Iraq invasion, the US could have fired about 6600 missiles. Considering the invasion toppled the Iraqi government, it's pretty unlikely that firing 6600 missiles with conventional payloads would have been anywhere near as effective.

[0]: https://en.wikipedia.org/wiki/Financial_cost_of_the_Iraq_War...

[1]: https://en.wikipedia.org/wiki/Military_budget_of_the_United_...

[2]: https://en.wikipedia.org/wiki/LGM-30_Minuteman#Counterforce
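
For what it's worth, the 6,600 figure is just the ratio of the two numbers cited above (a minimal sketch; both inputs are the figures quoted in this comment, not independently verified):

```python
# How many Minuteman III launches the cited budget increase would have bought.
budget_increase_usd = 132e9   # 2002->2003 defence budget increase, in 2020 dollars (cited above)
minuteman_unit_cost = 20e6    # 2020 unit cost cited above

print(round(budget_increase_usd / minuteman_unit_cost))  # -> 6600
```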


The comparison we're making is whether precision attacks, presumably on roughly building-sized targets, would be cheaper to do from long range via ICBMs (with conventional warheads), or via much cheaper but shorter-range missiles. My guess is that neither ICBMs nor shorter-range missiles could have accomplished what the U.S. military accomplished in Iraq. Presumably missiles alone were responsible for a small portion of that $54 billion.


If I can trust https://en.wikipedia.org/wiki/LGM-30_Minuteman a Minuteman III (which is the current ICBM design used by the US) will land within 800ft (240m) of its intended target 50% of the time. And outside that circle the other 50%.

In other words, you can't really target a "building-size" target with these (with maybe exceptions like the Pentagon).

For nuclear payloads, a few hundred meters of error is much less of an issue, of course.
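
To put numbers on how hopeless a building-sized target is at that accuracy, here's a minimal sketch assuming the usual circular-normal model of impact error, under which the probability of landing within radius r of the aim point is 1 - 0.5^((r/CEP)^2) (so r = CEP gives exactly 50%):

```python
def p_within(radius_m: float, cep_m: float) -> float:
    """Probability of impact within radius_m of the aim point, assuming a
    circular bivariate normal error distribution with the given CEP."""
    return 1.0 - 0.5 ** ((radius_m / cep_m) ** 2)

cep = 240.0  # ~800 ft CEP quoted above for the Minuteman III

print(f"within the CEP (240 m):    {p_within(240, cep):.0%}")  # 50% by definition
print(f"within a 20 m 'building':  {p_within(20, cep):.2%}")   # roughly 0.5%
print(f"within 30 m:               {p_within(30, cep):.2%}")   # roughly 1%
```

Under that model only about one shot in two hundred lands within 20 m of the aim point, which is the point above: without precision guidance, a conventional ICBM warhead mostly just digs a crater somewhere near the target.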


In the first Iraq war, "surgical strike" was a euphemism for indiscriminate carpet bombing. Was the second Iraq war any different?


In a world without nukes, we would've been all about precision targeted ICBM munitions. In a war they would've been used to hit government centers, military bases, staging and supply areas, oil refineries and wells etc.

It's an infinite range weapon which arrives in 20 minutes to kill a target your spy satellites see. They would be used all the time.


Conventional Prompt Global Strike is intended to provide the ability to deliver a conventional kinetic attack anywhere in the world within an hour. It has been an active area of weapons research for the US for 20 years. As you speculate, misinterpretation of the launch is a concern. [1]

As opensocket points out, there are many shorter range conventional weapons used across borders. The cruise missiles of Gulf War 1 or the drones of the post-September 11 world.

[1] https://sgp.fas.org/crs/nuke/R41464.pdf


To tie this in with current events, this is exactly what makes the no-fly zone idea in Ukraine so dangerous. All of the things you have to do to establish a no-fly zone and take away the enemy's ability to fire into and affect your no-fly zone look the same as a prelude to an invasion or nuclear first strike. This is made worse by the fact that many of the weapons systems you would be using are dual use. Meaning they were designed to deliver conventional or nuclear weapons. It's a massive gamble that the actions won't be misinterpreted or used to justify moving up the escalation ladder.


There are quite a lot of shorter-range conventional systems. In practice you don't need that inter-continental range for most purposes. For some modern examples you have the Chinese DF-21 and the Russian Iskander system. And a lot of those systems are dual-use: capable of delivering both nuclear and conventional payloads. It's not totally clear what that will mean in a conflict between two nuclear powers. What do you do when early warning radar picks up a ballistic missile coming in when you can't tell if it is nuclear or conventional? Plus this isn't a video game, you won't hear some alarm going off after it detonates indicating it was a nuclear explosion. You'll need to send someone to do a damage assessment, and that takes time.


We have satellites which can detect a double flash (characteristic of a nuclear explosion); the US and probably most other nuclear powers, with the exception of perhaps North Korea and Pakistan, would know instantly of any nuclear detonation above ground.


Not to mention the net of seismographs across the US. Those would tell us within our own borders if a nuclear detonation has occurred within seconds of impact.


We have seismographs and other measurement & signature intelligence collection means all over the world, and there's an elite unit in the US Air Force dedicated to detecting and localizing nuclear detonations no matter where they occur (as well as event attribution, nuclear forensics, intelligence gathering, etc.).

https://www.airforcetimes.com/news/your-air-force/2019/04/21...

https://en.wikipedia.org/wiki/Air_Force_Technical_Applicatio...


> It's interesting to consider how much scarier war could be right now, if we hadn't invented nuclear weapons... random missiles just dropping down from the sky for precision strikes, in countries whose borders have never even been penetrated.

Why are cruise missiles any less scary? They are indeed used in precision strikes across country borders, and can kill you just the same. The existence of nuclear weapons still allows some countries to use cruise missiles, as we see happen almost every year.


I'd consider how much more docile the nations of the world would have become sans nuke.

If the possibility of an untraceable, space-borne, hypersonic weapon was on the table we might have had a better deterrent than nuclear weapons. The lack of fallout and total deniability makes it almost certain they would have been deployed and decisively ended a few conflicts at the onset.

It is alarmingly frightening, even more so because the impact could be extremely precise, leaving infrastructure intact.


> I'd consider how much more docile the nation's of the world would have become sans nuke.

Interesting. I expected that nuclear weapons made us more docile. They're a huge deterrent against big powers going to war with each other. I think we are seeing this play out in Ukraine right now. If Russia had no nuclear weapons, I'd expect NATO to have intervened much more directly at this point, especially after seeing that Russia seems much weaker than expected.


That is precisely why Putin keeps saber rattling about Russia's nukes. NATO (but mostly the US) would wipe out the Russian forces in Ukraine in a matter of days. Since he's committed so much of Russia's military to the invasion, the West would effectively castrate Russian defenses and likely all manner of hell would break loose in all those oppressed satellite regimes (hello! Chechnya, Georgia, Belarus, etc.)


The normalization of saber-rattling about nukes is one of the most unsettling outcomes of this whole conflict TBH, and hopefully it's going to be addressed in some way down the line. If every non-nuclear power is suddenly vulnerable to conventional attacks by any rogue state with nukes, the ensuing equilibrium is pretty clear and is not good for overall stability.


>If every non-nuclear power is suddenly vulnerable to conventional attacks by any rogue state with nukes

There's nothing sudden about it, this has been the reality for decades now. We here in the US were on the other side of the matter in Iraq and arguably Vietnam. This is an old truth.


Some in the US even argued for using nuclear weapons on Vietnam, out of frustration with the lack of progress with conventional war.

Imagine how that would have gone -- dropping nukes on the Vietnamese in order to "save" them from Communism.

Thankfully saner minds prevailed.


Yup. Henry Kissinger was a big part of that nonsense, along with a whole bunch of equally sinister stuff. The cluster bombings of Vietnam, Cambodia, and Korea were in many ways directly the result of his machinations.


That reminds me . . . I need to watch Dr. Strangelove again.


Nuclear saber rattling has been the norm for a very long time; it's just that after the fall of the Soviet Union there wasn't much need for it. Things have returned to their more traditional state.


Kim Jong Un would like you to hold his soju.


Putin has been doing it since at least 2014 (along with the more conventional though due to nuclear deterrence more ridiculous threat of "visiting Berlin again").

It's just that until last month it was taken as just posturing to remind the West and the Russians that Russia, as a nuclear power, has to be taken seriously.


in the 80s it was not unusual for armed Russian strategic bombers to cross into US airspace above Alaska and then be escorted back out by US interceptors. I agree nuclear saber rattling is unsettling but it can get much worse than what we're seeing now.

/btw, in other discussions I've been too cavalier throwing around the likelihood of nuclear weapon use in Ukraine. I've thought about it much more since those other threads


I don't feel like anything has really changed in regards to nuclear saber rattling, Biden did so last year in regards to US citizens[0] no less.

[0]https://townhall.com/tipsheet/katiepavlich/2021/06/23/in-gun...


Well yes, but that's just Biden missing the point entirely as usual. The military is sworn to defend the Constitution against all enemies foreign and domestic, so if a mass insurgency is ever needed to counter some future totalitarian government, much of the military will be on that same side. What Putin has been saying is a whole lot more serious than that.


I think he more critically missed that using nuclear weapons on yourself is a massive tactical blunder. Just pointing out this isn't anything new.


Nah. There's no point in putting hypersonic cruise missiles in space. Too expensive, and not survivable. Those weapons will be launched from air, ground, and surface platforms. Magazine depths will be so limited that they'll only be used for the highest priority targets. They won't be enough to end any major conflict by themselves.


> Magazine depths will be so limited that they'll only be used for the highest priority targets. They won't be enough to end any major conflict by themselves.

I'm probably being incredibly naive in saying this, but what about "non-wartime" decapitation strikes — where instead of going to war, you just lob some well-timed hypersonic missiles at your enemy's capitol building / parliament / etc. while all key players are inside; presumably not as a way to leave the enemy nation leaderless, but rather to aid an insurgent faction that favors you to take advantage of the chaos to grab power? I.e., why doesn't the CIA bring ICBMs along to their staged coups?


If you do this, the enemy's nuclear-weapons services will look in the playbook under "what to do if someone kills the government", see, "launch everything as a counterattack", and press the button.

A key advantage of a hypersonic weapon is the possibility of first-strikes to disable the enemy's retaliation systems before they have the ability to launch more-traditional retaliatory responses. Only submarines are likely to be mostly-immune to them.


Sure that type of decapitation strike might be attempted on occasion against weaker countries. The US basically tried to take out Saddam Hussein and his inner circle using precision strikes at the beginning of the 2003 Iraq invasion but they mostly missed.



The Chinese weapon is ground launched, exactly as I stated. Sure you can boost such a weapon up above most of the atmosphere in order to get longer range, but the downside is that higher altitude flight paths make it easier to detect and counter.


AFAIK, there are no publicized counters to partially orbital hypersonic glide weapons in their glide phase due to their maneuverability and speed. Perhaps THAAD - but it may be difficult to ascertain the target when a weapon can glide halfway across the world


Well in theory the RIM-174 (SM-6) has some limited ability to intercept hypersonic glide weapons. Although obviously that's never been tested.

There are counters to hypersonic glide weapons beyond shooting them down. If you can detect it early enough then the target ship can change course and try to evade. The sensors on those missiles have very limited field of view so if it's not receiving a real time target track data link for course correction then it can possibly be dodged (depending on how many are incoming, weather, and other factors). Even if the target can't evade, a bit of advance warning would at least allow for cueing EW countermeasures.


Nah? You don't put cruise missiles in space. You put mass in space.

I suppose the point would be to hit the highest priority targets and nothing else. Loss of command and logistics has a profound effect on endurance


Putting mass in space as a weapon is just a silly scifi idea disconnected from reality. Even with modern reusable rockets, launch costs are still extremely high, especially if you need enough platforms to hit time sensitive targets. And the platforms wouldn't be survivable. There are cheaper, more effective ways to fulfill the mission.


Not to mention that you get at most as much kinetic energy out as you put in, minus losses to atmospheric drag and what's needed to drop the orbit (you have to burn retrograde to return from orbit).
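
As a rough illustration of that energy budget (a sketch with assumed round numbers: roughly low-Earth-orbit speed, ignoring drag losses and the deorbit burn):

```python
# Kinetic energy of a deorbited mass, expressed as a TNT equivalent.
mass_kg = 1_000.0
velocity_ms = 7_800.0      # assumed: roughly low-Earth-orbit speed, before drag losses
tnt_j_per_kg = 4.184e6     # standard TNT energy equivalence

kinetic_energy_j = 0.5 * mass_kg * velocity_ms ** 2
print(f"{kinetic_energy_j:.2e} J  ~=  {kinetic_energy_j / tnt_j_per_kg / 1000:.1f} tonnes of TNT")
# ~3e10 J, i.e. on the order of 7 tonnes of TNT per tonne dropped
```

So even in the best case a tonne of mass deorbited from LEO buys only a few tonnes of TNT equivalent, every joule of which had to be paid for at launch.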


I don't think you'd necessarily have deniability. We have early warning radars and satellite networks now capable of identifying an ICBM missile launch in the boost phase. Even with only ground-based sensors detecting the missile in the midcourse, it is a ballistic missile, which means the missile follows a predictable trajectory. This can be used to fairly precisely determine what it is aiming at, but also could be used to trace the missile back to a launch site.


Well from space, the point of origin is a bit arbitrary. We could just wait for our satellite to reach enemy territory.

Also, ICBMs in their present form would not likely resemble anything deployed in space


Didn't one of the old presidents propose that with the Star Wars program? Drop some metal rods from space and it's supposed to pewpew targets with minimal fuss?


Shorter range ballistic missiles have been heavily used in multiple conflicts around the world for many years. The US military has been researching the possibility of using conventionally armed long range ballistic missiles to fulfill the prompt global strike mission. Potential target countries have no anti-nuke defenses. But there is a risk that Russia or China could misinterpret a launch as aimed at them.


in WW2, Germany used V2 missiles for indiscriminate bombing of cities (primarily London). I can imagine it would look like that, but worse - and having gone to a few museums that showed Blitzkrieg London, that was bad enough as it is.


Blitzkrieg is something else. You're referring to the London Blitz. https://en.wikipedia.org/wiki/The_Blitz was a bombing campaign (airplanes fly over and drop bombs on cities, a very WWII thing to do). V-1 and V-2 sort of came "after" when rocket and guidance tech developed enough that it was practical to target cities using missiles from hundreds of miles away (northern France, I think).


Yes you're right, I meant the blitz, and regular bombing did indeed come first. It's been quite a while since I learned about ww2 history!


Agreed about the nastiness of V2 attacks.

However, the existence of nuclear weapons today doesn't seem to have prevented indiscriminate bombing (using whatever weapons: dumb bombs, unguided rockets, cruise missiles) of targets (including cities) in several countries in recent years.


Interesting tangent I hadn’t considered before. However, China is testing some anti-ship ballistic missiles. https://www.navalnews.com/naval-news/2021/11/aircraft-carrie...

To be fair, if these become a reality, they would likely strike targets in the Pacific Ocean and South China Sea, far away from the US, but the potential to spook nuclear nations is still there.


I would imagine delivering a conventional warhead with an ICBM has a very high risk of being mistaken for a nuclear armed ICBM. Also, they're expensive. Putting a JDAM package on an old iron bomb, turning it into a very advanced precision guided munition, is very cost effective.

https://en.wikipedia.org/wiki/Joint_Direct_Attack_Munition


>game-theoretic equilibrium

Equilibriums change, every major US platform was at one point designed to be nuclear capable, i.e. cruise missiles now liberally launched from planes/bombers/ships that are all nuclear capable. There's no reason nuclear countries who get attacked by any US platforms should assume any incoming ordnance ISN'T nuclear, down to gravity bombs, except for expectation - knowing the US has overwhelming conventional capabilities and would rather use them than nukes.

Same will apply as conventional ICBMs mature - we haven't seen much of it because ICBMs have not been sufficiently accurate unless carrying nukes, where CEP in meters doesn't matter. For countries with power projection, it was dramatically cheaper to get closer first and deliver less expensive ordnance. Conventional ICBMs seem effectively forbidden because most actors assume they're too inaccurate for anything but nukes and too expensive for anything but nukes.

But that's changing - there are hints that PRC is pursuing rapid global strike, i.e. US prompt global strike, because IMO it's the great equalizer in terms of conventional mutually assured destruction precisely because it isn't banned. A lot of articles being seeded on SCMP about PRC hypersonic developments spell out meter-level CEP ICBMs designed to conventionally attack strategic targets in depth, aka Prompt Global Strike.

Ergo (IMO) PRC maintaining a no-first-use nuclear policy while conducting a massive nuclear build-up to set up credible MAD deterrence. This sets up the game theory of accepting that conventional ICBM attacks on the homeland from across the globe are possible and that it's best to wait for confirmation unless one wants to trigger nuclear MAD. The entire reason the US / USSR and countries that could moved to a nuke triad or survivable nuke subs was because it bought more time than a hair-trigger / launch-on-warning posture.

This makes a lot of sense for PRC, which doesn't have the carriers, strategic bombers or basing to hit CONUS (or much outside of the 2nd island chain). It makes a lot of sense for any nation with enough resources for an ICBM rocket force but not enough for global force projection (basically everyone). The world will be a very different place if such capabilities proliferate. Imagine any medium size country with the ability to hit stationary targets worldwide - fabs, server farms, power stations, carriers being repaired in a drydock.


Hypersonics are a boogieman imo. You'll get one volley before everyone starts rolling out flak or other anti-warhead defenses, and hypersonics have a gigantic weakness in not being able to maneuver for beans. Once you're going over a mile per sec -> predicting where you'll be to fill it with crap to destroy you isn't that hard.

Who cares if you can blow up one target once? Unless you marshal enough to wipe out enough infrastructure to really cripple your opponent, it won't do you much good anyway; and if you do cripple them, and they're nuclear, congratulations; you just won a nuclear response. You now have bigger problems.


>anti-warhead defenses

ABM is unreliable, requires multiple interceptors per incoming - the exchange favors attackers who can always saturate. Also expensive, interceptors usually cost close to or more than what they're designed to intercept. And currently for anyone not the US, just getting the US to divert 100s of billions to homeland ABM defense is a win as it will take resources from other priorities. This is also assuming ABM works against hypersonics, including all the recent talk about flak/cloud defense. It's purely speculative.

> blow up one target once

For capital assets that take billions and years to replace like carriers, subs, supply ships, long range bombers, AWACS, critical infra nodes, all you need to do is blow them up once. It doesn't take a huge pool of hypersonics to essentially dismantle major US force projection capabilities.

> you just won a nuclear response

Why? The primary motive is deterrence, but with conventional forces. And if it comes down to actually exchanging conventional hits, it will follow the escalation ladder; no need to cripple, or to cripple to the point of an existential threat that warrants moving to nuclear.


I’m not sure there’s much difference between the Russians using air-launched cruise missiles (with ranges of hundreds to potentially thousands of kilometers and almost always capable of carrying a nuclear warhead) launched from their Tu-95 Bear strategic bombers (equivalent to the B-52), which Russia has done several times now in Ukraine, and firing a conventionally armed ICBM.


I think the difference is that air launched missiles could carry nuclear payloads but usually don't, while ICBMs could carry non-nuclear payloads but usually don't. All kinds of countries have been using air launched missiles all the time which at least on average tells us that every time one is fired it won't (shouldn't) have a nuclear payload. ICBMs on the other hand have never been used against anyone, and their stated goal for existence is carrying nuclear payloads - so if you see one coming your way you can assume it's a nuke, even though technically it doesn't have to be.


> So, because of this game-theoretic equilibrium, any use of the stratosphere for ballistic weapons delivery is effectively forbidden — even though nobody's explicitly asking for it to be.

Interesting! SpaceX was hoping to one day use Starship for quick intercontinental flights. I wonder if this unspoken rule would make that prohibitive?


Unlikely, as those flights would be scheduled and the launch site publicized. It wouldn't absolutely preclude a masked nuclear strike, but that would be possible already with space launches.


It'd also be a pretty shitty first strike, since you'd be limited to the count of starships scheduled to launch (and likely only ones headed generally in the direction of your target if you really want to mask it) at about the same time. So, probably just one or two, at best. Meanwhile, you'd need at least dozens (of missiles—more warheads) to have any hope of substantially reducing a major nuke-armed opponent's capability to retaliate.

Not remotely worth the complexity of setting up and executing. Maybe worth it against an opponent with extremely limited launch capacity (North Korea?) but that's a pretty niche application.


Until we have some magical non-polluting rocket fuel, I can't imagine intra-planetary rocket trips ever becoming permissible. Planes are bad enough.


Ignoring production, hydrolox would work, not that SpaceX are going down that road


Oh interesting! Rocket science is cool.


There could also be effective anti-missile defenses, which are currently politically impossible because developing them undermines the stability of deterrence.



Chemical and biological weapons are very useful in warfare as a way to demonize one combatant. False or doubtful claims of chemical weapons deployments have an effect on the response of the public and international organizations that is entirely out of scale with the damage that could be inflicted.


Relevant if you want real life cases of military impersonation: https://en.wikipedia.org/wiki/False_flag


In general, a wikipedia link with no additional comment does little to advance a discussion in any direction.

It might be relevant, but it's an extremely low-value comment. A good chunk of the people reading the comment (and caring about it) will already know it's describing false-flag operations.

A good way to think about good HN comments is "is there a specific point I'm trying to make". Anything that doesn't try to articulate a point is likely to be downvoted.


Thank you.


Just doesn't seem like it's worth it even for terrorists.

Why invest a bunch of time and effort making more and more deadly poisons when we've already got a wide variety of them that are cheap to manufacture, well known in how they work, and don't cost a bunch of research money to uncover?


At the risk of putting myself on a watchlist, They already know that, so They have an eye on certain kinds of labware, different precursors, and such. And They already have antidotes to some of these poisons.

One could optimize for compounds with hard-to-monitor precursors. Compounds that can be transported with low vapor pressure and volatility, so they cannot be easily sniffed out.

Or imagine a lethal compound with a high delay factor. Or something with specifically panic-inducing effects, perhaps hemorrhagic with a side effect of your skin sliding off in great slick sheets. Another interesting high delay factor compound might induce psychosis: have fun tracking where these gibbering maniacs were a week ago.

With a sufficiently dark imagination, "needs" could be identified for all sorts of compounds.

Remember, the goal is to throw a monkey wrench into the gearwork of an opposing civilization, not necessarily to kill. Fear of the unknown is very effective for this.


> At the risk of putting myself on a watchlist, They already know that, so They have an eye on certain kinds of labware, different precursors, and such. And They already have antidotes to some of these poisons.

Errm, maybe They Do.

On the other hand, I used to work in an organic chemistry research lab, and at least within my ex university's context, we could basically order anything we wanted from the standard chemical suppliers without anyone batting an eyelid. Pre-signed but otherwise blank order forms were freely handed out, you just filled in what compounds you wanted and handed it over, two days later it arrived and you collected it from Stores.

I personally ordered a compound for a reaction I was planning and it was only after it arrived - when I read the safety data sheet - that I realised just quite how toxic it was.

I backed carefully away from that particular bottle, and left it in the fridge, still sealed. Then found another - safer - way to do the reaction instead...


Right, and if you had ordered 3 barrels of the stuff, you'd get a visit from the feds.


> 3 barrels of the stuff

Barrels? Based on what the LD50 was / what the data sheet said, the bottle I briefly had in my hands - and yes, they did start shaking - would have done for a good proportion of the residents of a small city had it managed to be spread around in a form that would have been ingested.

Chemistry labs are typically well-stocked with quite a lot of fairly unpleasant things. They're also the places where a lot of genuinely amazing and potentially life-saving work gets done!


You can't just tease us like this! If not the actual chemical... maybe an analogue or something? Chemistry is one of my great fascinations lol.


Not GP but one time we did a cyanation reaction (attaches a -CN group) in the pilot plant involving trimethylsilyl cyanide (TMS-CN). We had what looked honest to god like an 8lb propane tank of the stuff ("cyanide and cyanide accessories" lol), with an air-free delivery siphon, typical for air-free, pyrophoric, and toxic reagents.

Normally we'd do this in the high potency suite but the reaction scale was too big so we did it in the conventional GMP pilot plant. Bunny suits and cyanide meters for all three chemists.

I calculated that dose-wise, it was enough to kill tens of thousands, were it directly delivered; however, if it were merely vaporized, it would only kill those in the immediate area, and it would hydrolyze quickly. There's a great article linked elsewhere in the thread about the ineffectiveness of chemical weapons; I've added it here for convenience because it's a great read. Air is big, and the distance-cubed law dilutes any toxic agent to below dangerous concentrations quite fast (comparing pound-for-pound lethality against conventional explosives).

https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...


Maybe azides.


Is that toxic bottle still sealed, in the fridge, after you've left the institution? I've had to deal with a few EHS situations like that.


> Is that toxic bottle still sealed, in the fridge, after you've left the institution?

Quite possibly!

The previous occupant of my bench area (and hence adjoining fridge space) left some barely-labeled custom radioactive compounds(!!) in the fridge for me to find shortly after I took over that space, so I know how that feels.

After consulting suitably-trained personnel, the contents of the vials were then disposed of ... by pouring down a standard sink, with lots of running water.

Those were the days :eek:


How did you deal with it? I would expect they don't take returns.


The same thing that happens when someone quits.

The people finding these things have the same skill sets and access to the same handling/disposal facilities as the people leaving these things, so it's very much an "oh, my former coworker forgot to/didn't have an opportunity to dispose of X before departing, I'll just do it myself in the same manner he would have". Furthermore, these people have lives, they go on vacation and cover each other. The institutional knowledge of how to handle dangerous organic things necessarily exists in the institutions that do so.


No bench chemist should attempt to clean up this stuff. Go to your university or company EHS, and if you don't have that, your city does. The history of chemistry is filled with responsible and intelligent organic chemists who nonetheless died terrible deaths. EHS has strategies to avoid this.


Humor us all and think another few steps ahead. And what's EHS gonna do?

They're gonna CC the guy whose office is right beside yours because (surprise surprise) the departments and teams whose work results in them having weird nasty stuff buried in the back of the walk-in fridge are the same people who know how to handle it.

EHS is just a coordinator. They don't have subject matter expertise in everything. So they contact the experts. If your biology department has a fridge with Space AIDS(TM) in it, it's because your department is the expert, so you'll be getting the call.


Yes, I know how these things work, as my coworkers were those EHS people. The point is that they had training, and they are working within an official university context (laws, etc).


So why not save everyone the week of back and forth emails while nothing gets done and ask them directly how they want to deal with it, rather than putting tons of people on blast and substantially constraining their options by bringing intra-organization politics into the mix?


Sounds like a great way to get Normalization of Deviance[1]. One senior person says "I know how to dispose of this, so it's OK if I don't go through proper channels." Then the next person, following their lead without understanding the implications, says "Joe Senior over there disposed of something scary they found without wasting time going through EHS, so I'll do the same." Maybe it goes fine for a while, but eventually you'll end up with a situation where you've poisoned the groundwater or released dangerous chemicals into the air, because nobody is following the proper channels any more.

1. https://en.wikipedia.org/wiki/Normalization_of_deviance


Telling your boss or relevant colleague instead of going over everyone's heads from the get go isn't normalization of deviance and we both know it.

I really dislike these sorts of "name drop" comments. They're just the equivalent of "F" or "the front fell off" with a high-enough-brow-for-HN veneer on top.


You're suggesting that people bypass official procedures and/or laws in order to save time. This is a bad path to start down. The fact that you're posting this as a throwaway indicates that you don't want your HN account associated with these proposals.

Here's a relevant software-related analogy:

I work in a situation where if we receive certain types of data, we have to go through proper procedures (including an official incident response team). It would be very easy for me to say "I've verified that nobody accessed this data, and we can just delete it," instead of going through the proper channels, which are VERY annoying and require a bunch of paperwork, possibly meetings, etc.

Maybe nothing bad happens. But next time this happens, one of my junior colleagues remembers that the 'correct' thing to do was what I did (clean it up myself after verifying nobody accessed the data). Except they screwed up and didn't verify that nobody had accessed the data in question - and now we are in legal hot water over a data privacy breach.

And then people go back through the records, and both the junior engineer and I get fired for bypassing the procedures which we've been trained on, all because I wanted to save some time.


>You're suggesting that people bypass official procedures and/or laws in order to save time. This is a bad path to start down.

You are assuming rules say what they mean and mean what they say (and are even written where you're looking, and if they are that they're up to date). If it's your first week on the job, by all means, do the most literal and conservative thing. If it's not, well, you should know what your organization actually expects of you, what is expected to be reported and what isn't.

There's a fine line to walk between notifying other departments when they need to be notified and wasting their time with spurious reports.

When maintenance discovers their used oil tank is a hair away from being a big leaking problem they just fix it, because they are the guys responsible for the used oil and keeping it contained is part of their job.

Your bio lab or explosives closet isn't special. If the material is within your department's purview then that's the end of it.

Not every bug in production needs to be declared an incident.

>Maybe nothing bad happens. But next time this happens, one of my junior colleagues remembers that the 'correct' thing to do was what I did (clean it up myself after verifying nobody accessed the data). Except they screwed up and didn't verify that nobody had accessed the data in question - and now we are in legal hot water over a data privacy breach.

You can sling hypotheticals around all you want, but for every dumb anecdote about informal process breaking down and causing stuff to blow up I can come up with another about formal process leaving gaps and things blowing up because everyone thought they had done their bit. It's ultimately going to come down to formal codified process vs informal process. Both work, both don't. At the end of the day you get out what you put in.

>The fact that you're posting this as a throwaway indicates that you don't want your HN account associated with these proposals.

This account is how old? Maybe I just use throwaways because I like it.


It sounds like you may have had a bad time with EHS in the past. I found that by making friends with everybody involved ahead of time, I suddenly had excellent service.

sadly, after 30 years of training to be a superhacker on ML, my greatest value is actually in dealing with intra-organizational politics.


I work in a regulated software space, and my experience is that treating quality and regulatory folks as adversaries is a great way to have your projects take way longer than they should and cause immense frustration. Understanding the hows and whys of the way things work makes life easier for everyone. I haven't worked with EHS in the past, but I imagine it's much the same - if you're seen as somebody who's trying to cut corners and take shortcuts, yeah, you'll probably have a bad time.


Great EH&S people are amazing.

At the root of researchers' reactions to EH&S are two things:

1) EH&S will frequently be the source of unfunded and unresourced mandates. "You must stop your research until you've properly tidied up all of your electrical cords" is, in the short term, an impediment to forward progress. Researchers frequently under-budget for safety/disposal costs when submitting proposals, leaving nobody with money to foot the bill for expediting the resolution for a safety stoppage.

2) The statistical likelihood of a single accident is greater for a large organization than for a single research group. One group can get away with a lethal practice for a hundred years before the first death. A hundred labs will only get away with a similar practice for a year.

If you find a competent and reasonable EH&S auditor, they are a great resource. They may ding you for some safety violations, but they'll be able to point the way toward safer practices. In the best case, even if they don't have mitigation funds for longstanding problems, their voices carry real weight and can expedite the allocation of scarce resources toward real safety concerns.


This is how you wind up spending many, many $ remediating a building. And getting those weird questions like, "Inventory says we have 500ml of X, anyone know where it is?"


> This is how you wind up spending many, many $ remediating a building

Oh yes, and this isn't a new phenomenon, for instance:

"When Cambridge's physicists moved out of the famous Cavendish laboratories in the mid-1970s, they unintentionally left behind a dangerous legacy: a building thoroughly contaminated with mercury. Concern about rising levels of mercury vapour in the air in recent months led university officials to take urine samples from 43 of the social scientists who now have offices in the old Cavendish. The results, announced last week, show that some people have exposure levels comparable to people who work with mercury in industry."[0]

[0] The mercury the physicists left behind https://www.newscientist.com/article/mg12817450-800-the-merc...


You call your university's EHS department and tell them as much about what you know about the contents of the bottle (which may not be what is on the label). They seal off the lab, remove it, and using what they can determine about the contents, destroy it safely.


> I backed carefully away from that particular bottle, and left it in the fridge, still sealed. Then found another - safer - way to do the reaction instead...

I've wondered how manufacturing plants handle this. You back away because you're afraid of touching the stuff - how does a giant factory that produces and ships the stuff handle it?


It clearly can be handled; the question is what's the procedure to handle it correctly, and do you trust your procedure? A manufacturer, or someone regularly working with this kind of thing, does know and trust theirs; if you suddenly realize it wasn't quite what you signed up for, backing off is clearly the better choice over trusting your guess at a procedure. But risk can be managed a lot.

Although certainly over-confidence can also happen on the other end, e.g. if something that's quite similar to other dangerous things you work with suddenly has an additional trap. And Safety Datasheets are notorious for not necessarily representing actual in-use risks well.


The same way other dangerous stuff is made?

There are plenty of dangerous chemicals made on a huge scale - sulfuric acid, cyanide, explosives, ...


"They" are not proactive, because they know people hiding bad things need time and coordination. So, only taking notice (and notes) and investigating strange patterns is enough.

But also a lot of what the GP says doesn't apply, because in the case of terrorism, "They" is either the police or random people, so "They" definitely do not have antidotes or training on how to handle known poisons.


I've also worked in chemistry research labs and there are certain compounds (in the US) at least that will need approval. Anything on the DEA precursor list will do it. There are certain chemicals that are dual use for chemical weapons and chemotherapy synthesis (things to make melphalan for example). Those required some extra forms to order.


Fluorine compound? Organic heavy metal? I'm curious.


>Another interesting high delay factor compound might induce psychosis

This kind of exists already. BZ gas is the well-known delirium-inducing compound with a delay of several hours: (https://en.wikipedia.org/wiki/3-Quinuclidinyl_benzilate#Effe...)

The effects are probably mostly temporary though.


> Or imagine a lethal compound

You'll get nuked (or similar WMD) for that.

Imagine a somewhat more realistic set of applications for hot new research chemicals.

How about aircraft or shells or covert actors spray some "thing" that shorts out electrical insulators 1000x more often than normal. Or makes the vegetation underneath power lines 1000x more flammable than normal vegetation. Our power is unreliable, causing a major economic hit both directly and via higher electrical bills. If "they" want to invade, now the civilians won't have power and will be more likely to get out of the way long before the front line troops arrive. I mean you could probably put nano-particles of graphite in a spray can right now, then stand upwind of a power station or substation, but I bet extensive research would do better. A lot of high power electrical "Stuff" relies on plain old varnish being inert for a long time ... what happens if it wasn't? Again, if you shut down a country they gonna nuke you, but what if electrical power transformers and switching power supplies only last one year on average instead of ten? That's a huge economic and maybe military strategic advantage, but would you get nuked back because some nation's TVs burn out in one year instead of the carefully value engineered ten years?

How about a spray or microbe or whatever that screws up air filters. Who cares, right? Well most troops (and cops) in most countries have gas masks. Zap their masks via whatever new magical method, then drop simple plain old tear gas the next day or until logistics catches up, which will take awhile assuming they even know they're damaged. Normally when hit with CS, they'd mask up and the CS would have no effect on mask wearers other than reduced vision, but now the side that didn't get their masks ruined has a HUGE tactical advantage.

If you make a bioweapon and kill half the population, they gonna be PISSED and you're gonna get nuked. So try something a little more chill. If your vitamin A reserves are gone, your night vision is temporarily essentially gone. Yeah, for long term vit A deficiency you'll get long term skin, general growth, and infection risk problems, but if someone sprayed you with some weird compound that made you pee out all your body's stores of Vit A before tomorrow morning, the only real short term effect would be night blindness, and that would go away in a couple days with a normal-ish diet or by taking a few days of multivitamin pills or a couple supplement vit A pills.

So spray the enemy (and/or the civilians) and they can't see in the dark, so magical automatic curfew for the civvies, and attack the night blind military and absolutely pound them because they're night blind and can't see your guys. If they have NVGs then hit them at dawn/dusk when the NVGs won't work completely correctly but they can't see without them because of night blindness. It's temporary and never hurt anyone other than the opfor "owning the night" until the victims figure it out or naturally recover, so at a strategic / diplomatic level would a country nuke another country because they couldn't see at night for a couple days? Naw, probably not.

And you can imagine the terror attack / psych warfare potential of leaflets explaining, "we turned off your night vision for a couple days, now obey or we shut off cardiac function next time." Either for the government to use against civilians (think Canada vs truckers) or governments to use against each other (China vs Taiwan invasion or similar). Or give them temporary weird fever sweats or turn their pee robins-egg blue or all kinds of fun.

Now the above is all sci fi stuff I made up and AFAIK I'm not violating any secrets act, unless this post magically disappears in a couple hours LOL.

Think of the new non-lethal battlespace like computer virus attacks. Yeah, we could "EMP" Russia to shut off most of their computers, and they'd be really pissed off and nuke us right back, so that's a non-starter. But release "windoze annoyance virus number 31597902733" and that could have real-world effects. Especially if you release 20, 30, 4000 new zero-days on the same day.


> How about aircraft or shells or covert actors spray some "thing" that shorts out electrical insulators 1000x more often than normal.

Dropping anti-radar chaff strips is a very good lo-tech way of shorting transformers and power lines. I can't find a link, but IIRC the USAF discovered this accidentally when training missions led to power outages in nearby towns.


The US has a dedicated submunition for this mission called the BLU-114/B. It has been highly effective in past uses, though I imagine cleanup after the conclusion of hostilities was a serious pain. There's been research into even smaller conductors that would get inside fan-cooled equipment, though a.) one might imagine potentially negative health impacts on persons exposed to these materials, and b.) it might be difficult to occupy an area previously hit with these types of munitions given the proliferation of fan-cooled equipment in the military.

https://commons.wikimedia.org/wiki/Category:Graphite_bombs#/...


>Especially if you release 20,30,4000 new zero-days on the same day.

Interesting example of how cyber attacks could blow back. Anything you put in a virus can be taken out and used against you.


Release a highly communicable airborne disease with a low fatality rate, but which noticeably reduces the IQ of a large portion of the infected group.


True, though you have to remember that threat is a social construct and isn't necessarily a rational measure. The 2001 anthrax attacks killed 5 people, injuring 17, and shocked the nation. As a direct result Congress put billions into funding for new vaccines and drugs and bio-terrorism preparedness. If 5 people were killed and 17 wounded in a mass shooting by a terrorist, would we really have reacted as strongly?

If you wanted to instill fear in a country, I think being attacked by some custom, previously unknown chemical weapon would be scarier than sarin.


From what I understand, the delivery of a chemical or biological weapon is the hard part. For most things, you can't just pour it out on the ground and have a huge effect. Some things you certainly can: weapons-grade anthrax probably just needs a light breeze to devastate a city, but something like that is beyond the reach of your average terrorist group.


Hmmm, and then you have to ask, for whom is it worth it?

Who might invest a bunch of time and effort in those areas?


I guess developments are not published in Nature.


They are amazingly useful in real warfare. Drop nerve gas on a city, walk in a couple days later. WMDs are the only way to really take a country by force, and of all of them, chemical weapons are the most palatable and also the easiest to produce.

Considering this, defense against them is at least mildly important. A proper defense only exists by considering offense, so they're still developing chemical weapons somewhere. The modern hot topic is viruses and other pathogens.


That's not how it works, and they're not very useful at all. The amount of actual product you need is non-trivial and at that point you might as well just use modern conventional munitions.

The reason why it fell out of favour isn't because it's dangerous, it's because it was ineffective outside of TV and film.


That's not how it works? That's all I get? I'd refer you to the site guidelines, barging into a thread and going "NO U" is not a real conversation.

A siege of a city is more impractical than it has ever been. In ancient times a siege was conducted out of necessity; it was the only way to kill everyone inside if a population did not desire subjugation. Complete death of those resisting you was typically the goal: with the slow communication of antiquity, leaving any resistance alive might mean coming back to face an army the next time you visited. It was easier to depopulate the region and move your descendants in.

We see echoes of this in modern times. We "took" Kabul at extreme expense, but did not really "take" it, as asymmetric enemy forces continued to operate throughout the entire country while the US occupied Afghanistan. Taking many cities across a nation with advanced embedded weaponry is going to be impossible. If it came down to it, such a country would resort to area denial, like Russia did in Chechnya and Syria, leveling the cities instead of sweeping them.

We don't see people deploying chemical WMDs not because they are too expensive but because of political reasons, and after that, because they don't have them due to disarmament treaties. All it takes is someone deciding they really want to win for all of it to change. You can deny a huge area for weeks with a few chemical warheads. You can make a city inhospitable using less materiel than it'd take to flatten it.


I'd invite you to read the article I linked: https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch... . Generally speaking, if you need to take a city you're better off using high explosives than chemical weapons. It's well researched and cites sources.


And I'd invite you to re-read my comment. He agrees with my main point:

> In static-system vs. static-system warfare. Thus, in Syria – where the Syrian Civil War has been waged as a series of starve-or-surrender urban sieges, a hallmark of static vs. static fighting – you see significant use of chemical weapons, especially as a terror tactic against besieged civilians.

The Russians are in a similar situation because they do not have equipment suitable for a highly mobile army (I don't quite expect them to use chemical weapons, for reasons below, but it's worth pointing out).

There's a lot wrong with his take. A lot of what he is writing is unsourced conjecture. It's like saying man portable missiles are irrelevant when you can have the CIA topple their government and remove their will to fight.

For one, conventional arms are horribly inefficient at killing in the first place! It's thousands of rounds fired for a confirmed kill, and the stat is equally bad for artillery. Any marginal improvement is a big deal.

He does not convincingly separate their lack of legitimate use from moral concerns. Developed nuclear states don't use them for a lot of reasons, but a huge issue is that chemical weapons are on the escalation ladder. In the US's case it's also that we don't want to kill indiscriminately. He all but states this at one point:

> In essence, the two big powers of the Cold War (and, as a side note, also the lesser components of the Warsaw Pact and NATO) spent the whole Cold War looking for an effective way to use chemical weapons against each other, and seem to have – by the end – concluded on the balance that there wasn’t one. Either conventional weapons get the job done, or you escalate to nuclear systems.

> But if chemical weapons can still be effective against static system armies, why don’t modern system armies (generally) use chemical weapons against them? Because they don’t need to. Experience has tended to show that static system armies are already so vulnerable to the conventional capability of top-flight modern system armies that chemical munitions offer no benefits beyond what precision-guided munitions (PGMs), rapid maneuver (something the Iraqi army showed a profound inability to cope with in both 1991 and 2003), and the tactics (down to the small unit) of the modern system do.

I take no exception to this, but basically no large army has encountered a case where it needs quickly deployed area denial that landmines can't provide. A massive retreat into the interior of a country may be such a case, but then a decapitation strike against that state is probably going to be more effective.

For what it's worth, this is why Russia's concern about NATO countries marching into it is nonsensical. It's just that, perhaps, they never realized how nonsensical it was, as their defense planners do not have experience with a highly dynamic army. (But oddly they seem to have some idea of what might happen, as this is what likely led to their development of nuclear/neutron mortars and artillery. Any situation where those would come out is going to be ICBM time anyway.)


> That's not how it works? That's all I get? I'd refer you to the site guidelines, barging into a thread and going "NO U" is not a real conversation.

There's no point in detailing beyond that. Respectfully, it's like someone suggesting quicksand is a good way to stop tanks because they watched it in a cartoon. I'll add some commentary in good faith but I'm not going to comment beyond this.

The fact of the matter is that the amount of chemical product you need to try to slow down the enemy is so insanely large that it just doesn't make sense to use -- it doesn't make sense to produce, it doesn't make sense to prepare, it doesn't make sense to bother firing.

I invite you to pick your chemical weapons agent of choice (sarin, chlorine, whatever), pick the spatial area you want to attack, and then do some back-of-the-napkin estimation of how much of that chemical weapon product you would actually need to disperse in that area to achieve your objective. I don't want you to account for failed launches, or wind, temperature and so on; let's assume that every munition fired goes to the exact spot it needs to be and disperses perfectly.

You'll very quickly realise that chemical payloads are wholly useless. We're talking on the order of hundreds or even thousands of rockets to clear out a small area.

Perhaps it made sense in trench warfare 100 years ago, but it doesn't make sense against a guerrilla (or even conventional) force in modern times.

> We don't see people deploying chemical WMDs not because they are too expensive but because of political reasons, and after that, because they don't have them due to disarmament treaties. All it takes is someone deciding they really want to win for all of it to change. You can deny a huge area for weeks with a few chemical warheads. You can make a city inhospitable using less materiel than it'd take to flatten it.

None of this is true. Sorry, but it's just not. And hopefully following the exercise above you'll come to see it that way as well.


Their use goes against the Geneva Protocol, and the Chemical Weapons Convention doesn't even allow stockpiling them, so they're not even useful if you are a terrorist with a death wish -- there are simpler ways to end your problems.


Russia used them extensively in Syria quite recently, so the concerns are valid.


Given that the Syrian war is still ongoing, that seems to debunk the idea that it's as easy as "Drop nerve gas on a city, walk in a couple days later."


What?! This is not true! I've never heard anyone claim Russia did it. The mainstream consensus is that the Syrian government did it, while a minority thinks it was either old stock getting released by accident or the rebels doing it.

Do you have a source? Because even with all the controversy surrounding the international investigation and the theories that have spawned around it, Russia wasn't even a possible suspect.


The allegations by the OPCW are politicised[0] and based on theoretical chemistry, i.e. hexamine as an acid scavenger.

That is to say: neither the Syrian government nor Russia have used chemical weapons in Syria. They haven't used them because they are -- for all intents and purposes -- useless. If you want to take down a group of people in flip flops and have access to a thermobaric[1] MLRS[2] you're not going to break international law so you can give one or two of them a scratchy throat with chlorine payloads (if you're lucky).

[0] https://wikileaks.org/opcw-douma/

[1] https://en.wikipedia.org/wiki/Thermobaric_weapon

[2] https://en.wikipedia.org/wiki/TOS-1


> They haven't used them because they are -- for all intents and purposes -- useless.

To your point, some numbers from wikipedia:

The LD50 of sarin is 39 micrograms per kilogram, so 0.0039 grams to kill a 100kg man half the time in theory.

Now consider the worst of the gas attacks in Syria was in Ghouta. Fatality estimates vary considerably, but the highest estimate claims 1729 fatalities. The attack is claimed to have been performed using at least 8 rockets, likely more, each with at least 50 liters of liquid sarin. That's a lower bound of 400 liters of liquid sarin, for an upper bound of 1729 kills. Liquid sarin's density is roughly the same as water, so that works out to more than 230 grams of sarin per person killed. And if you use the lower fatality estimates or the higher rocket count estimates, the numbers are even worse.

Conclusion? The delivery/dispersion of poison gas is incredibly inefficient. In practice you need tens of thousands of times more nerve gas than the LD50 figures would have you naively believe.
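
(A few lines of Python make the same back-of-the-napkin check explicit; all figures are the estimates quoted above, not authoritative numbers, and the density assumption is approximate.)

    # Rough sanity check of the figures above (the thread's estimates, not
    # authoritative numbers): grams dispersed per fatality vs. theoretical LD50.
    ld50_g_per_kg = 39e-6                      # 39 micrograms per kilogram
    lethal_dose_g = ld50_g_per_kg * 100        # ~0.0039 g for a 100 kg person

    sarin_g = 8 * 50 * 1000                    # 8 rockets x 50 L, density ~1 g/mL
    g_per_fatality = sarin_g / 1729            # highest fatality estimate

    print(f"grams per fatality: {g_per_fatality:.0f}")                        # ~231 g
    print(f"multiple of lethal dose: {g_per_fatality / lethal_dose_g:,.0f}")  # ~59,000x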


The assertions in the Wikileaks docs are contested, and focus on a single incident when the war has had multiple.

https://www.bellingcat.com/news/mena/2020/01/15/the-opcw-dou...

For example, that https://en.wikipedia.org/wiki/Ghouta_chemical_attack happened is not disputed by Russia; they dispute who did it.


Bellingcat has been pretty reliable but their investigations around the chemical attacks were very... flawed. It happened for sure, but their analysis of how the events unfolded on the ground was so lacking (not their fault, OSINT can only get you so far in a chemical attack) that imo they probably should've just not published their initial articles. Doesn't mean they can't be right on the OPCW controversy, but it's still something to keep in mind

But in any case, while yes, there is a dispute around who did it... Russia was never claimed to be responsible by anyone. The two options are either the Syrian government or the rebels, and that's true for all the chemical attacks.

So the GP was completely wrong: Russia did not use chemical warfare in Syria!


Bellingcat quite literally created the propaganda -- they teamed up with Dan Kaszeta to invent the hexamine nonsense, pushed it among the neocon circles (all of the ex-Just Journalism people, Atlantic Council, Foreign Policy, Henry Jackson Society affiliates, and so on) and then wrote their own narrative.

Their biggest target throughout their efforts? Ted Postol, professor emeritus of science, technology, and national security policy at MIT, who they claimed was a Russian disinformation agent for finding the theoretical chemistry ridiculous and impractical.


> Fortunately we don't see any real work

Not really, having new developments all classified is not helpful to anyone.


The real fun starts when somebody starts using techniques like this that overcome the weaknesses of known chemical weapons and provide specific advantages. It's also kind of hard to monitor computational chemical research.

It's my understanding that the Soviet army doctrine in the '70s and '80s included the use of chemical weapons. That hypothetical threat put a hell of a lot of friction on NATO in terms of training, supplies, and preparedness.


That we don't see it doesn't mean it's not happening.


This will probably come in handy for industrial espionage type tasks.

Let's say you had a nation-state enemy whose population eats a lot of some ethnic ingredient. Come up with a cheap artificial flavor/color or process that is optimized to give heavy consumers cancer in 30 years -- not in one year, that would show up in the approval process. Then have an agent in the target country "discover" through random chance this really excellent food dye or whatever.

Now, if you kill half the population with cancer, you're gonna get nuked in response; even non-nuke countries will be pissed off enough to get nukes just to nuke the perpetrator. But let's say you just make the victims fat and sick, dying a little younger -- just enough to take a 1% hit on economic growth...

Some people would say this is how we ended up with trans-fats and margarine and vegetable oils in general or certain veg oils in specific.

Certainly, corn syrup has caused more human and economic devastation than fission, nerve agents, or most any WMD I can think of...


The problem with this is that it's basically genetic engineering by selection: you might successfully dent economic growth now, but future generations will be resistant to the poison as those susceptible to it die off. You're securing your own demise long-term if you don't subject your own population to the same.


This is far more fun than believing it was an emergent accident. You don't have to eat corn syrup, btw.


I guess if it's tasty then it's fair game.


If this were really a practical concern, machine learning would be designing drugs that fly through the clinic today. They aren't and so this paper, though click-grabbing, is probably of no practical consequence.

One reason is lack of data. Chemical data sets are extremely difficult to collect and as such tend to be siloed on creation. Synthesis of the target compounds and testing using uniform, validated protocols are non-trivial activities. They can only be undertaken by deep pockets. Those deep pockets are interested in return on investment. So, into the silo it goes. This might not always be the case, though.

For now, the paper does raise the question of the goals and ethics around machine learning research. But unintended and/or malevolent consequences of new discoveries have been a problem for a long time. Just ask Shelley.


A successful drug candidate must be useful in the treatment of human medical problems and not have harmful side effects that outweigh its benefits. A weaponized poison may have any number of harmful effects without diminishing its utility. A compound with really indiscriminate biochemical effects, like fluoroethyl fluoroacetate, makes a potent poison without any specific tuning for humans. It's much easier to discover compounds that genuinely harm people than those that genuinely help them.


"Now, keep in mind that we can't deliberately design our way to drugs so easily, so we won't be able to design horrible compounds in one shot, either. "

I would discount this, heavily and concerningly, as a false sense of security. The reality is that the prohibitive factors in creating new drugs from compounds discovered this way (by AI or another automated process) are almost entirely the safety-testing procedures and regulations... If the bad actors are trying to find the most lethal compound with no such oversight -- and chances are very high that they aren't bound by any such regulation if they're state-level labs operating with impunity -- there is nothing but the synthesis itself that would make the formulation and testing of these compounds as impractical as the author claims. Take away the years-long, heavily scrutinized and regulated multi-stage billion-dollar path to drug approval and you'll find that barrier is not so high.

I would like to think this data could be helpful to any organizations looking to proactively develop detectors or antidotes for such compounds - especially if the threat was previously unknown to them.

Let's say an entirely novel class of toxin was found in a cluster of these predictions that has no existing references in private or public records - it could be that another organization has discovered and synthesized something similar through one of many other paths.

Many lines are drawn between this type of approach and that of white-hat hackers: you must necessarily recreate the vulnerability in order to mitigate it. "White hat" biolabs claiming the same are operating on the same conundrum, and "studying for the sake of mitigating" and "creating a weapon" are fundamentally indistinguishable without absolute knowledge of intent -- which is impossible from the outside.


> The reality is that prohibitive factors in creating new drugs from compounds discovered similarly (by AI or other automated process) is almost entirely due to testing safety procedures and regulations

Most drug candidates fail because they don't work, not because of any regulatory procedure. About 50% of drug candidates that enter Phase III trials--the final clinical trial before approval--fail, and that's almost always because they failed to meet clinical endpoints (i.e., they don't do what they're supposed to do), not because they're unsafe (toxicity is assessed in Phase I trials).


That "not working" part has some nuance to it as well. How well do we predict ADME? Is there binding with some off target protein that makes it terrible? Maybe it just doesn't bind to the desired target at all.

Toxins don't have those constraints; it's not even about regulation. Making something that's safe is way harder than making something that is not safe, purely because of the complexity involved in making the thing safe.


I was thinking this as well. If a new drug works well for 99% of the people, has mildish side effects for 0.9% and is really bad for 0.1% of people, that's no good. But if a nerve agent kills 99% of people and is not effective on 1%, that's just fine.


Also, compounds that are fatal tend to be fatal to all life forms, not just humans, with the variation being dosage. It goes without saying that taking a compound and finding a drug that does one very specific thing without doing anything else -- and demonstrating that it's safe in people -- is orders of magnitude less likely than finding a compound that is lethal.


Hype over crap like this grinds my gears. Organophosphate nerve agents like VX are ALL toxic. There are about a zillion such toxic molecules, all containing the same functional group. This study does not demonstrate that this tool is a better generator of toxic molecules than anything that includes the basic rules of valence and a rudimentary understanding of shape similarity.

When thinking about whether ML does something novel, we must always compare with some simple alternative. I would be impressed if it'd predicted something like Palytoxin, a highly specific molecule with extraordinary toxic activity. There's no way the tools of this paper would though.

-- director of ML at a drug company.


> ricin is (fortunately) not all that easy to turn into a weapon, and the people who try to do it are generally somewhat disconnected from reality (and also from technical proficiency).

Of course that fact was no barrier to much hype about the "dangers" it posed, either. I suspect the same now; that we have more to fear from the fear junkie propaganda than the actual facts.


I personally fear the lone-wolf attack drastically reducing in cost and effort. Where it was once cost-prohibitive to design and manufacture your own nerve gas or lethal virus, these days with AI/ML and CRISPR-Cas and the like, it feels like any intelligent, deranged person wanting to take as many people to the grave with him has the tools to do just that.


Intelligent and deranged persons already have the tools to cause far more casualties with far less effort using guns and/or explosives. The "problem" for them is that people who get sufficiently deranged to think that killing a few hundred (or even thousand) people will meaningfully solve the problem they are upset about will also be sufficiently deranged that their ability to reason coherently is drastically reduced.


Would they? I'm not seeing that as necessarily true. The Unabomber seems like a good example: https://en.wikipedia.org/wiki/Ted_Kaczynski

Or look at mass shooting incidents: https://en.wikipedia.org/wiki/Mass_shootings_in_the_United_S...

The Las Vegas shooting was rationally planned and carried out. He managed to shoot nearly 500 people, killing 60.

They did happen to pick conventional weapons. But is that because of rational choice, or just familiarity and availability? Imagine somebody like Kaczynski, but instead of being an award-winning young mathematician, he was an award-winning industrial chemist or genomics student.


None of these examples got away with it, or even achieved any of their stated goals. If terrorism actually worked, it would be a lot scarier.

Also fortunately most political extremists don't want to kill just random people. Weapons of mass destruction don't discriminate, making them poor choices.


I'm not getting your point as it relates to the discussion here.

But for some goals, terrorism definitely works. Look at the US South after the Civil War. White terrorism worked for more than a century, from thousands of individual lynchings up through mass events like the Tulsa Massacre and Wilmington Coup. Or look at the number of shootings that involve misogyny, which is the enforcement mechanism for patriarchy. In both cases, violence is used to create fear to keep a population subordinate.

It also can work very well against occupying forces. Afghanistan, for example. Or if Russia takes Kiev, we'll surely see how well it works against Russian soldiers.

And it works very well to get attention and heighten tensions. Al Qaeda's 9/11 attacks were a big success for them on both counts. If you're in the "immanentize the eschaton" class of kook, which these days includes a lot of people from the far left to the far right, the heightened tensions are their own reward.

And even if it didn't work, that's not really the question. The question is whether somebody being crazy enough to think it will is enough to prevent them from succeeding. I'm saying the two aren't mutually exclusive. Aum Shinrikyo comes to mind here. As, in another way, does Russia. Would Putin using nuclear weapons ever keep him in power? I doubt it. But might he do it anyhow on one theory or another? Nobody can rule that out.


Terrorism didn't work after the Civil War. The North was able to occupy the South indefinitely with minimal cost.

>the number of shootings that involve misogyny, which is the enforcement mechanism for patriarchy

Please translate this to English.


Kaczynski did not optimize for death, really following the lead of the shockingly common political bombing campaigns of the 70s. The Las Vegas shooter might be a better example.


Sure, but I don't think that was a necessary outcome. Consider this quote: "I felt disgusted about what my uncontrolled sexual cravings had almost led me to do. And I felt humiliated, and I violently hated the psychiatrist. Just then there came a major turning point in my life. Like a Phoenix, I burst from the ashes of my despair to a glorious new hope."

I agree he went with something common to the time, but I don't think that was a necessary outcome. After all, his approach didn't achieve his goals, so we can't say his sort of terrorism is any more rational than aiming for something bigger. Indeed, given the nominal goals he ended up with, one could argue that mass-death terrorism would have been more rational.


I think this is inevitable and something we will grapple with in coming decades. Especially around genetic engineering of viruses.


>>The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, ...We have spent decades using computers and AI to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade...

Of course now, the next step is to use the technology to preemptively search for and develop antidotes to the new potential weapons their tool has discovered.


We already (sort of) do this. AI/ML is probably used for simulating nuclear explosions, and is [arguably] even more useful and accurate than actually setting off a bomb, and measuring it.

It makes sense that it could be weaponized. When Skynet becomes self-aware, it would probably just design a chemical that kills us all, and would aerosol that into the atmosphere. No need for terminators, just big pesticide cans.


I don't think AI/ML is really used for simulating nuclear explosions. There's not much point, better techniques exist.


What such better techniques exist?


Knowledge of actual physics. Explosions can "easily" be simulated from first principles. Easily in scare quotes because it takes quite a bit of computing power. This was actually my wife's first job back in 2003, simulating missile strikes for the Naval Research Lab. A thorough simulation took a few days back then, but given that was almost 20 years ago, I'm sure it's a lot faster now.

In contrast, think of what you'd need to do this via machine learning. You'd need to gather data from actual missile strikes first and learn approximation functions from that. While it's certainly doable, this is inherently less accurate, thanks to both approximation error and measurement error. It's not like pixels -> cat where the true function isn't known.
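
(As a toy illustration of those two error sources -- the "physics" function, noise level, and surrogate model below are entirely invented for the example, not anything anyone actually simulates -- consider fitting a model to sparse, noisy "measurements" of a known function:)

    # Toy illustration: even when the underlying physics is a simple known
    # function, a model fitted to noisy measurements carries both measurement
    # error and approximation error. Everything here is invented for the example.
    import numpy as np

    rng = np.random.default_rng(0)

    def true_overpressure(r):
        # stand-in "physics": smooth decay with distance (illustrative only)
        return 100.0 / (1.0 + r) ** 2

    r_obs = rng.uniform(1, 20, size=30)                        # sparse observations
    y_obs = true_overpressure(r_obs) + rng.normal(0, 0.5, 30)  # measurement error

    surrogate = np.poly1d(np.polyfit(r_obs, y_obs, deg=3))     # learned approximation

    r_test = np.linspace(1, 20, 200)
    err = np.abs(surrogate(r_test) - true_overpressure(r_test))
    print(f"max error: {err.max():.2f}, mean error: {err.mean():.2f}")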


Some finite element analysis packages come with a label saying "Please pinky promise you won't use this for developing nuclear weapons".

https://wci.llnl.gov/simulation/computer-codes

https://www.worldscientific.com/doi/10.1142/9789812707130_00... (google the author)

https://www.lanl.gov/orgs/adtsc/publications/nw_highlights_2...

https://erdc-library.erdc.dren.mil/jspui/bitstream/11681/678...

Details are obviously scarce but computers are pretty much the number 1 reason weapons testing isn't particularly necessary any more.


> Details are obviously scarce but computers are pretty much the number 1 reason weapons testing isn't particularly necessary any more.

That's true, but it really applies only if you have access to historical nuclear test data. The computer simulations are reportedly parameterized using empirical factors that aren't openly published and aren't available to additional nuclear aspirants unless they run tests of their own. The alternative, if you don't have test data and don't want to test, is to build very simple, conservative bomb designs like the US did with the Little Boy gun-type uranium bomb. It wasn't tested before it was dropped on Hiroshima. That's also the kind of design South Africa used for its secret nuclear weapons program:

https://en.wikipedia.org/wiki/South_Africa_and_weapons_of_ma...


I'm quite sure we have already invented several chemicals that match your description -- like sarin, invented in 1938 by someone who, indeed, wanted to create a decent pesticide. A lethal concentration of sarin gas is something like 28-35 mg/m³ over 2 minutes of exposure, according to Wikipedia. [0]

Hitler was well aware of its creation, and I believe quite a lot of the stuff was produced for the purpose of warfare. There were several in the Nazi military who wanted to use it, but Hitler declined.

That seems rather odd, given his indifference to exterminating people with gas on an industrial scale beyond the theater of war. It has been suggested that Hitler was probably aware that to use sarin gas would be to invite the allies to do so in response, which would result in a dramatic loss of life on the German side due to the sheer lethality of such chemical weapons. [1]

Perhaps he thought it easier to stick to conventional warfare, in which the pace is more manageable than with WMDs, where you would start going down the road of mutually assured destruction but without the strategic framework in place to prevent anyone from actually wiping out a population before realising how bad an idea it would be.

And I think this reluctance to change the game, this seemingly deliberate moderation, perhaps best demonstrates the true difference between the machine and human in warfare.

It is not a difference in innovation -- we have always been very good at inventing highly optimised ways to end life.

The difference is that a machine intelligence will not hesitate. It will not ask for confirmation, pause or break a sweat. It will pull the trigger first, it will point the bombs at anything that is an adversary and anything that could theoretically be or become an adversary, and it will not miss. And it will not have to face ethical criticism and historical condemnation afterwards. [2]

[0]: https://en.m.wikipedia.org/wiki/Sarin

[1]: https://www.washingtonpost.com/news/retropolis/wp/2017/04/11...

[2]: Assuming this is a Skynet-like machine intelligence, which doesn't really have the capacity for remorse or negotiation and seems primarily, indeed solely occupied with the task of ending human life.

Obviously, a true AI that is essentially a conscious mind equivalent to our own minds, may experience the same hesitancy that most of us would, were our fingers to be over the buzzer.

Unless the AI independently arrives at a different set of values to us, like the Borg or something.


Chemical weapons are expensive. Consider the logistics and training required to effectively deploy them, plus any specialized equipment. Meanwhile, they're only useful as long as your opponent doesn't know you're planning to use chemical weapons, since countermeasures are relatively cheap and every major military knew what to do about them by the time WWII broke out. As soon as your enemy knows to beware chemical attacks, all you're doing is annoying them while making it hard for your own troops to advance (they have to put on chemical suits/masks themselves, or else wait for the gas to disperse). Very hard to use effectively in maneuver warfare. They didn't even prove very effective in WWI, which was much closer to an ideal environment for their use.


It's the classic logistics problem: the scaling ratio of weight vs. volume, or something like that. Just like nukes -- if you heat an enemy soldier to 100M degrees he isn't any more dead than if you heat him to 10M degrees, and volumes expand very slowly with mass, so making bigger and bigger bombs is a fool's errand.

Same problem with chem weapons. Hit a tank brigade with 1000x the lethal dose and they aren't any deader than if you hit them with 1x. But if the bomb misses, which is likely, all you've done is REALLY piss them off. Nerve gas in an empty wheat field just kills a bunch of corn bugs, but it really pisses people off. If you target their tank brigade and miss, they'll target your home town, as we did to them with conventional bombs even without having been nerve-gassed to start with. If you target their home town, then the brigade you missed is going to be unhurt and really angry. It's the kind of weapon that's pretty useless unless you have infinite supply and infinite logistics -- like Cold War USA or Cold War Russia.

The Allies had better logistics than the Germans, so the second time around in WWII they knew that trying to go chem would just end up with the Germans getting more chem'd than the Allies.

Another issue: WWI and earlier were all about siege warfare and breaking sieges, where WMDs are awesome and useful, whereas WWII and newer is all about maneuver warfare and blitzkrieg. All of Germany's plans, and all of their early success, were based on the idea that anything in range of shells or aircraft today is going to be occupied rear supply area next week at the latest, so destroying it would be pretty dumb because we need that area to be the rear of the battle space next week.

For a modern comparison, the USA could have nuked the green zone in Iraq and there's absolutely nothing anyone could have done about it, but 'we' knew we'd be occupying the green zone and needing something like the green zone, and the green zone was sitting there for the taking. So from an incredibly short-term perspective it would have saved troops, time, and effort to just nuke it instead of taking it the old-fashioned way, but in the medium and longer term it would have been counterproductive to the war effort to use WMDs against the green zone, so we didn't.


Hitler probably also experienced gas himself (not sure his generals did, though). People forget that he was actually a decorated NCO from WWI, which had a lot to do with his terrible attitude later in life.

It was fairly worthless, militarily. High risk, big mess, no real tactical advantage, and it just pissed everyone off. Its only real efficacy would have been for bombing civilian targets, and I don't think they had the delivery mechanisms.


The article assumes that fully developing a new chemical weapon would require considerably more effort -- that is, meeting the requirements for military usage: storable at room temperature, relatively easy to manufacture from commonly available precursor chemicals, etc. [1]

How true is that? Are there components of this process that make things easier now -- where I have chemical structure X, and a system generates the process steps and chemicals needed to produce X? How much of the domain of chemistry / chemical engineering has been automated these days? What are the future prospects for this?

[1] I assume one of the design goals for a new chemical weapon for military use is that it breaks down in the environment, but not too quickly (like say in a week or a month). Though I suppose if you want to just destroy civilization you would design for longevity in the environment instead. And being able to seep through many kinds of plastic if possible.


> Where I have chemical structure X, and a system generates the process steps and chemicals needed to produce X.

Undergraduate chemistry students spend a fair amount of time learning how to look at a novel structure X and, by disconnecting it "backwards" into simpler components, deduce a route by which it might be synthesised "forward" in the laboratory from readily available starting materials.

There's an excellent book on this, "Organic Synthesis: The Disconnection Approach", by Stuart Warren.
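
(For the programmers in the thread, the core of that disconnection approach is essentially a recursive search: keep disconnecting until everything left is purchasable. A toy sketch of the idea -- the rule table and compound names below are entirely made up for illustration, not real chemistry:)

    # Toy sketch of retrosynthetic ("disconnection") search: recursively break a
    # target into simpler precursors until everything remaining is commercially
    # available. The rule table and compound names are invented for illustration.
    DISCONNECTIONS = {
        # target: list of possible precursor sets (one per known "disconnection")
        "ester_X": [["acid_A", "alcohol_B"]],
        "acid_A":  [["nitrile_C"]],
    }
    PURCHASABLE = {"alcohol_B", "nitrile_C"}

    def plan_routes(target, depth=0, max_depth=5):
        """Yield synthesis trees (nested dicts) whose leaves are purchasable."""
        if target in PURCHASABLE:
            yield target
            return
        if depth >= max_depth or target not in DISCONNECTIONS:
            return  # dead end: no known disconnection
        for precursors in DISCONNECTIONS[target]:
            sub_plans = [list(plan_routes(p, depth + 1, max_depth)) for p in precursors]
            if all(sub_plans):  # every precursor needs at least one viable route
                yield {target: [plans[0] for plans in sub_plans]}

    for route in plan_routes("ester_X"):
        print(route)   # {'ester_X': [{'acid_A': ['nitrile_C']}, 'alcohol_B']}

Real computer-aided synthesis planning tools do roughly this over enormous learned or hand-curated rule sets, scoring candidate routes by cost and feasibility.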


Interesting.

Were / are you a chem major?

Any other major topics or readings you could recommend for someone wanting a general understanding of key concepts in modern chemistry? I'd suppose generally: materials, synthesis, o-chem, and chem-eng.

My own background: began a hard-science degree. One year undergrad uni chem.


The field is called "process chemistry". A very big thing in pharma:

> Process chemists take compounds that were discovered by research chemists and turn them into commercial products. They “scale up” reactions by making larger and larger quantities, first for testing, then for commercial production. The goal of a process chemist is to develop synthetic routes that are safe, cost-effective, environmentally friendly, and efficient.

https://www.acs.org/content/acs/en/careers/chemical-sciences...


Thanks, though my read is that this is not just pharma, but applies to numerous fields. Say, o-chem, semiconductors, nanoparticles, and more.


While it's worrying and worth thinking about, the track record of using AI to generate pharmaceuticals to do good has been "mixed", except really it's just been a bust. It may someday do great things, but not much yet, and one silver lining is that AI-generated toxins are unlikely to improve on the human-designed ones, either.

"That is, I'm not sure that anyone needs to deploy a new compound in order to wreak havoc - they can save themselves a lot of trouble by just making Sarin or VX, God help us."


> the track record of using AI to generate pharmaceuticals to do good has been "mixed", except really it's just been a bust.

Researchers have only been using AI for drug development for something like 6 years; I think it's way too early to call it a bust.


I guess I should have said "...thus far".


Wow - I picked up on this earlier today, and even quoted the same passage as this article does. I was amazed that the scientists had not considered that AI could be/is being used for harm. (Was downvoted for this, but whatevs.)

It struck me as incredibly naive, but then - what would someone else do in their situation? Most of us work in silos without awareness of how our work is used, and I suspect we are often causing (unintentional) harm to others whether we are scientists, programmers, in finance, in health, in government, etc. If we realise our predicament, there isn't a moral authority to make things right. There is only the legislation that was written by lobbyists paid by the corporations we work for.

Putting the article in broader context, perhaps it is about the creation of a moral framework for AI intended to pacify our disgust at the system we find. I expect that we will be expected to look away as AI "ethics" committees justify the unjustifiable but call it ethical. As whatever-it-is is found to be ethical after all by ethical authorities, most of us will wave this through and consider that we have acted judiciously. IMO.


Chemical weapons above a certain level, bioweapons, and nukes are all seen as weapons of mass destruction and are not really that useful tactically. Introducing strategically destabilizing elements to a conflict greatly increases its unpredictability and is probably a health risk for the leaders involved.


This is essentially what the pesticide and herbicide industries have been doing since their inception, i.e. designing molecules that efficiently kill animals, insects and plants. It seemed like a miracle at first, but the long-term consequences of things like persistent chlorinated aromatics and their derivatives (Agent Orange and dioxin for example) eventually appeared in human populations.

The development of the toxic nerve agents (organophosphate compounds, mostly) in particular was a side effect of research into insect toxins. The nerve agents were discovered in this manner; they simply worked too well. Nevertheless, these pesticides were deemed safer than the organochlorines because they degraded fairly rapidly after application (although they are implicated in nerve-damage-related diseases like Parkinson's in agricultural areas).

Insect infestations are indeed a big issue in agriculture and can wipe out entire crops if not dealt with, but there are plenty of options that don't require applications of highly toxic or persistent chemicals.

Otherwise, this is just another of the many issues modern technology has created. Smallpox is another one - in the late 1990s, there was a great debate over whether to destroy the last smallpox samples - and then in the mid 2000's, someone demonstrated you could recreate smallpox by ordering the appropriate DNA sequences online and assembling them in a host cell. Then there's the past ten years of CRISPR and gain-of-function research with pathogenic viruses, a very contentious topic indeed, and still unresolved.


Many years ago a colleague who works in defence told me about a job posting he'd seen but was having a moral struggle with.

The opening was for "Lethality Engineer": Ideal candidate with good physics and medical background.

I said that the main perk was that at least on Halloween he wouldn't need to buy a costume. He could just go out as himself.

He didn't take the job.


I hope recent events have illustrated that if it weren't for the people who develop lethal weapons, we (as in you and I) would be helpless against the bullies of the world. Unilateral pacifism is a cute philosophy only when there are rough men standing ready to do violence on its adherents' behalf.


I strongly agree with this sentiment. However, it is hard for an ethical person to participate in developing war technology when possession and usage of the weapons is purely a political question, and history also has seen our side of geopolitics commit atrocities.

My stance has previously been that I am unwilling to work on weapons technology, because history has shown that these weapons sometimes end up being used for an indefensible cause. Then all of a sudden you're an accomplice to murder, and getting away with it.

In the light of Russia's invasion of Ukraine, which is just a continuation of its historical imperialism, working on weapons is something I would be perfectly okay with and probably even motivated to do. But stop for a moment and think what a history of aggressive military actions does to our society's ability to recruit for this important job.


I was a reviewer for a book on ethical machine learning that wasn’t published. I’ll never forget, the author stated “don’t work on anything that could cause harm.” Here I am reading this while working in defense being like “that’s a lazy and dumb position.” Nearly anything in the wrong hands could cause harm.

It’s not unethical to work in the auto industry because people can die in car accidents. It’s not unethical to work in the beer business because people can become alcoholics. It’s not unethical to work for a credit card company because people can bury themselves in debt. And it’s not unethical to work in defense because the weapons may fall into the wrong hands.

What’s unethical is encouraging these problems and not trying to prevent them. And yeah, it’s hard to navigate these ethical issues, but we’re professionals like doctors and lawyers and part of the reason we get paid like we do is because we may have to wrestle with these issues.


> It’s not unethical to work in the auto industry because people can die in car accidents.

Why not? Cars are pretty fucked in a lot of ways. I wouldn't consider someone working in the auto industry to be morally bankrupt but that doesn't mean that automobiles are not ethically ambiguous

> It’s not unethical to work in the beer business because people can become alcoholics

Why not? You are producing a dangerous drug that is constantly killing people. Would you say that people manufacturing illegal heroin are absolved from their externalities?

> It’s not unethical to work for a credit card company because people can bury themselves in debt.

Also why not? Credit cards, like automobiles, are pretty fucked in a lot of ways.

> it’s not unethical to work in defense because the weapons may fall into the wrong hands

Once again, why not? It seems as though weapons have consistently fallen into the wrong hands for all of history.

I'm not saying that all people in these industries are bad people, but we can't pretend that our actions have no externalities just because those externalities are accepted by society as normal.


Let me give you a few more examples:

* Ropes can be used for hanging, and even for racially motivated murders as they were in the south. Does that mean it's unethical to work for a rope-making company?

* Paint thinner can be huffed recreationally for a high. Does that mean it's unethical to work for a paint thinner company?

* Computer security course knowledge can be used to hack systems. Does that mean teaching or learning computer security is unethical?

The thing about "externalities" is that, while some of them can be blamed on the producers of the products themselves, the blame for others lies on the people using the products. While in the above three cases I listed, the answer is more obviously that the people using the products in a harmful or irresponsible way are to blame, the assignment isn't always as clear cut in other cases.

That being said:

* Saying that working in the auto industry is unethical in general because of car accidents is silly, especially when we are talking about accidents due to human carelessness. Driver education in the US is atrociously limited as-is. While I would agree that some accidents can be attributed to manufacturers designing cars badly, or some other problem on the end of the company, the fact is that, if you put a sufficiently stupid person behind the wheel of a huge metal apparatus capable of going faster than a cheetah, bad things can and do happen with that person behind the wheel.

* Saying that working with alcoholic beverage production is unethical is in my view rather silly. While one could argue, and I would agree, that marketing specifically to drunks is unethical, there are plenty of people (myself included) who have drunk beer and wine, but never gotten addicted. Comparing heroin (a far more addictive and dangerous drug) to a drug that many people have been able to use without getting addicted is poor argumentation.

* Credit card companies may often have predatory practices, but that does not mean that every person who goes into credit card debt isn't using those cards in an imprudent, irresponsible way. While some of them are victims, most would, in the absence of credit cards, be falling prey to some other vice.

* As for defense: while weapons can and do fall into the wrong hands, nothing would change even if we magically went back to the pre-firearm era: bullies would just use swords, clubs, and trebuchets instead of firearms and missiles.

There are many externalities where the blame lies squarely on the companies (pollution, global warming, overuse of plastic, environmental degradation). But to assume that all societal issues involving industries can be blamed solely on the companies and people working in those industries is naïve.


> But to assume that all societal issues involving industries can be blamed solely on the companies and people working in those industries is naïve.

I never suggested this at all. I am just saying that there are externalities that our work generates and we should be critical of that instead of pretending that it doesn't matter.

As for the examples you gave, I think it's pretty obvious that there is a major qualitative difference between manufacturing rope and designing weapons that are regularly used to kill people. One provides general utility and the other is specifically designed to maim and kill.

> Does that mean it's unethical to work for a paint thinner company?

Yeah, why not? Shouldn't we be trying to manufacture non-toxic chemicals that don't poison people and the planet? People working at those companies aren't solely responsible by any means, but that doesn't mean their actions don't have serious negative effects on society and the planet.

Also, cars are bad for many more reasons than just accidents. Cars are loud, they pollute, they take up valuable space. Building roads and parking lots to support them destroyed entire neighborhoods, historic buildings, and public space. They also isolate people, encourage wasteful sprawling urban design, and make it more difficult for disabled people to get around. The list goes on and on.


> I never suggested this at all. I am just saying that there are externalities that our work generates and we should be critical of that instead of pretending that it doesn't matter.

I should have been clearer that this was a general statement rather than a paraphrase of your argument. My apologies.

There is a qualitative difference between ropes and guns with respect to this argument. I agree with that. I would argue that there is less of one between that and the other things I listed.

> Yeah why not? Shouldn't we be trying to manufacture non-toxic chemicals that don't poison people and the planet.

To be clear: I was arguing whether people recreationally huffing paint thinner is a valid argument against working for a paint thinner company. There may very well be other, valid arguments for not working for a paint thinner company, but people huffing paint thinner contrary to all common sense and the instructions on the container isn't one of them. This also applies to the earlier point made in the thread about car companies: there are many potential ethical issues with working for them, some of which you have listed, but I don't think, say, human-stupidity-caused error by distracted drivers is one of them.

Also, while using non-toxic chemicals is desirable, sometimes it's not an option. This is why, in my example, I chose paint thinner (which AFAICT doesn't really have a non-toxic alternative) and not something like freon, which not only has alternatives but is arguably much, much more environmentally damaging than, say, mineral spirits.


Understood, thank you for clarifying. It seems as though we are in general agreement here. I appreciate your candor!


I'm not sure I've sufficiently communicated the background of my moral ambiguity here. I came of age during the War on Terror; the years where Iraq, Afghanistan and Syria were the primary fronts for Western military power. The brutal necessity of the Western world standing up to aggression against dictatorships was not so obvious during these years; from my vantage point of Western media the impression was that dirt-poor suicide bombers were the biggest risk to our civilization. And we were dealing with those with an aggression that at best left a dubious aftertaste.

One could be excused during these two decades for erroneously assuming that the world had, for the foreseeable future, moved on towards trade and economic competition rather than wars of aggression, with nuclear weapons ensuring the balance. It was probably naïve, but not hopelessly naïve. Against this backdrop, regularly seeing weddings and maybe-civilians bombed from drones on dubious intel, it doesn't seem like a childish or cowardly stance to just turn one's back on the weapons industry. I'd call that a considered decision.

The same reasoning is almost palpable in European politics, which made a 180 degree shift away from this in the two weeks after Putin dispelled these notions. My point is, it wasn't obvious from where I stood that we would be back here today. Now that we are, the calculus seems clearer.

Maybe with a more measured US-led use of military force since 2000, Western defense politics wouldn't have required so much hand-wringing.


You can be happily manufacturing weapons for a good cause today, only to see your government turn evil the next day. Unfortunately people don't cluster around ideas but around geography, and that's out of most people's control.

This does not mean we shouldn't do something but we have to realize nothing is permanent and the fruits of our labor can very well be misused the next day.


Like anything difficult there are real risks and trade-offs, but just refusing to engage in difficult pragmatic issues is not the ethical position imo, it's just the easy one that feels good. It puts the burden of actual complex ethical decisions onto other people.

The west needs the capability to defend the ideals of classical liberalism and individual liberty. In order to do that it needs a strong military capability.

https://zalberico.com/essay/2020/06/13/zoom-in-china.html


>> It puts the burden of actual complex ethical decisions onto other people.

People who may not have even considered the ethical situation. It seems the people who are concerned about the ethics or morality of a necessary but questionable job are exactly the ones you want in that role (although not activists who would try to shut it down entirely).


By your logic, engaging in weapons manufacturing is the only acceptable conclusion. In fact, people who refuse to do so are participating just fine, even though you don't agree with their contribution.


My logic is that refusing to engage is not an ethically superior position when the capability is necessary. Engaging in difficult, high-risk, but necessary issues as best you can is.

That doesn't mean everyone needs to work on weapons, just that the work on weapons is necessary and those that do it are not ethically compromised in some way. It's just a recognition of this without pretending not engaging is somehow more morally pure. Not engaging is just removing yourself from dealing with the actual hard ethical issues.


An interesting thought, given the politics of the day. If you are not actively engaged in weapons manufacturing, are you not complicit in the murder of the Ukrainian people? If you are not actively helping to supply the Ukrainian army with weapons for their defense then, by your inaction, are you enabling their deaths?


The flaw in that logic is that, if it weren't for the people who develop lethal weapons for the bullies, we wouldn't have to fear the bullies.

Also, I think the design space of "radical defense" is under explored. Our (western) armies are still designed for attack and force projection, although we have long since renamed our war ministers secretaries of defense.

But I wonder if you could develop defense capability to make your country unattackable. Not by threat of retaliation, but for example by much much stronger missile defense. Or by educating ("indoctrinating") your own population, so that an occupier would not find a single collaborator? Or by mining your own infrastructure, and giving every citizen basic combat training (a bit like the swiss)? Or by fostering a world-wide political transformation that is designed to prevent wars from happening at all?

I think if we wanted to spend money researching stuff to keep us safe, it doesn't necessarily have to be offensive weapons.


The flaw in this logic is somewhat related to law enforcement: if your military is min/maxed for defense, someone who wants to do you harm only has to be right once in order to actually do it. Looking at nuclear weapons and missile defense (ignoring the existence of dirty bombs, etc.), your opponent needs only to be right once for one of your cities and hundreds of thousands of civilians to be gone. And likewise, if you've focused on defense, you're likely wholly unprepared for any sort of retaliation.

The Swiss approach, what with literally bunkering in the mountains and everything, is interesting, but the logistics for larger countries would be exponentially harder (and most lack the geographic help). "Fostering world-wide political transformation" is so pie in the sky it's honestly not worth serious discussion. It's fanciful.

Someone will always be willing to make weapons for the bullies because a lot of people don't view them as bullies in the first place. Ask people in Iraq, or Chechnya, or Ireland, or Pakistan, or Taiwan, who the bullies are, and you'll get wildly different answers that will cover approximately 90% of the worldwide population.


> The flaw in that logic is that, if it weren't for the people who develop lethal weapons for the bullies, we wouldn't have to fear the bullies.

You can't uninvent weapons, and you can't prevent the bullies from making their own weapons.

The problem with an impenetrable defensive shield is that it gives your potential enemies the heebie-jeebies (technical geopolitical term) that, now that you have the shield, you can attack them without fear of reprisal. If the enemy thinks you're working on a credible shield (or even a shield you think is credible) their best option is to attack now before you, emboldened by your sense of invulnerability, attack them.


This is an ongoing concern of US weapons policy. By refusing to back down from improving our missile defense capabilities, we undermine MAD and our adversaries’ willingness to engage in disarmament (thereby making it more likely these weapons will be used).


> The flaw in that logic is that, if it weren't for the people who develop lethal weapons for the bullies, we wouldn't have to fear the bullies.

False. Bullies are a problem even if no one has weapons beyond what can be grabbed and used from the environment without any invention. Heck, bullies are a problem if everyone just has the weapons built in to their bodies.

> But I wonder if you could develop defense capability to make your country unattackable.

Not without incidentally developing a huge edge in offensive weapons that would make you attackable when it inevitably diffused to others. Uniquely defensive technology mostly doesn't exist.

> Not by threat of retaliation, but for example by much much stronger missile defense.

Much better interceptor missiles mean the technology for much better missiles generally. Directed-energy interception means directed-energy weapons. Hypervelocity kinetic interceptors are general-purpose hypervelocity kinetic weapons.

> Or by educating ("indoctrinating") your own population, so that an occupier would not find a single collaborator?

That kind of indoctrination can also be used offensively, but the enemy doesn't need collaborators to attack you. (They might need it to conquer without genocide, but attackers willing to commit genocide for land are not unheard of, nor are attackers whose goal isn't conquest.)

> Or by mining your own infrastructure, and giving every citizen basic combat training (a bit like the swiss)?

Mining your infrastructure is itself creating a vulnerability to certain kinds of attacks.

> Or by fostering a world-wide political transformation that is designed to prevent wars from happening at all?

It's been tried, repeatedly. The League of Nations, the Kellogg-Briand Pact, the UN. It would be nice if someone ever finds the "one weird trick to prevent war forever", but it seems distinctly improbable, and it is particularly suicidal to bank your defense on the ability to find it.


Especially since we cannot really help Ukraine with anything but tech and they are outnumbered, so the only advantage we can give is how much better our weapons are than Russian ones.


Have you considered, though, that we (as in you and I) might be some of these bullies in the world, and that these rough men aren't just standing by, but are actively doing violence on our behalf? One needn't look much further than recent events to find examples aplenty.

I understand the point you are trying to make, but it's not as easy as pretending that the weapons "we" develop are purely for morally and ethically righteous purpose.


With respect you may be making some unfounded assumptions about what I've built and what I believe.

My point was really about the fact that this job title "Lethality Engineer" actually exists. And moreover, that it asked for medical qualifications, which would go against any doctor's Hippocratic oath.

Most of us who've done defence-related work are happy at the edges, with tactical information systems, comms, or guidance (my stuff ended up in targeting).

But when it comes down to figuring out how fragments can be arranged around a charge to make sure the waveshape optimally penetrates as many nearby skulls as possible... hmmm suddenly not so gung-ho about it.

That's not a distant, theoretical morality about tyrants and bullies. I've no problems contemplating my family's military history and am plenty proud of it, even though we'd all rather live in a world without this stuff.


Until the lethal weapons are turned on those (countries, groups, people) who develop them. Kind of like gun owners are more likely to be harmed (or harm others) by their own guns, notwithstanding the arguments about personal protection used to justify such ownership.

This has already happened with groups the US has armed in the past. The US itself has been the bad guy sometimes.

There is no proper resolution to this struggle, and people who are guided by their conscience should not be attacked for having a "cute philosophy" that relies on "rough men standing ready to do violence on their behalf."


A sibling comment put it well that refusing to wrestle with these important questions is the unethical position as it just pushes the decision off onto other people. "Cute philosophy" is a perfect way to describe that because it's completely untenable if everyone were to think that way.

The gun thing is completely tautological though. Yes, if you have a gun you're more likely to be injured by your gun than someone who doesn't. How would someone who doesn't own a gun be injured by their own guns in the first place? It's like saying you're more likely to be in a car accident if you own a car. Of course you are.


If everyone were to think that way there would be no need for those weapons in the first place.

When I said gun owners are more likely to be harmed by their guns, I meant as opposed to using the gun to protect themselves. Instead of an incident where the gun came in handy, it is more likely that the gun is used in a wrongful way or against oneself. I'm not sure where the tautology is.


I generally agree with this, however, the rough men willing to do violence on our behalf are more and more becoming the quirky scientists who are very disconnected from the actual impact of their work. I think there's a big difference between those types of people. It seems like people don't feel the weight of violence as much as they used to. I imagine this will increase as we develop more AI driven weapons.


This doesn't square with my experience in defense. I worked in software and we saw plenty of combat and aftermath footage and were always aware that the design decisions we made and the tools we built meant life-and-death for someone. We did our best to make sure it was the right people.

I'd add, the weight of violence—if anything—is going up. People today are devastated when a dozen soldiers and scores of civilians are killed in a suicide bombing or urban conflict, but go back to Vietnam and those incidents barely register because they happened all the time. The number of people killed in any given armed conflict has dropped quite a bit in the last 50-or-so years. (The Syrian Civil War is one big exception.)


Thanks for sharing your experience. To your last point, I think it's a tradeoff. The number of individuals getting killed is going down, but we're closer than ever to the ability to kill everyone more easily (beyond nukes: weaponized viruses, etc.). Scientists who are drawn to a field of research may not be practically connecting the dots about what they're actually working on, or the full implications of their work (e.g. gain-of-function research). These are the people I'm referring to when I mention them not realizing the weight of violence that they are contributing to.



This comment is confusing two completely separate things. There's a world of difference between not being willing to defend yourself and actively trying to come up with more aggressive and lethal weapons.

The argument that "we need defense!" only justifies the need to stockpile and develop _sufficiently_ lethal and tactical weaponry to neutralize incoming threats (like anti-ballistic systems). It doesn't justify inventing deadlier weapons. No dispossessed victim of foreign invasion has ever needed a bioweapon to assert themselves, and there's no chance one, if developed, would ever be used for anything but war crimes. You should absolutely turn down roles like "Lethality Engineer" from an ethical standpoint, even if you agree military defense is necessary.

People raise the spectre of deterrence as a utilitarian justification for needing more powerful weapons ("har har, they'll think twice about attacking us if they know we have nukes!"). But that's narrow thinking. Deterrence can be achieved in other less-damning ways, like strategic alliances and building more robust defense systems.

tl;dr defence != deadlier offence.


We are acting pretty helpless because the bully has nuclear weapons.


> Many years ago a colleague who works in defence told me about a job posting he'd seen but was having a moral struggle with.

This is a good struggle to have. What's ironic in many cases is that we don't experience these quandaries in other jobs, but the ethical and moral ramifications still exist. The early days of search in Google or social in Facebook probably didn't elicit the same kind of thought process as a lethality engineering post. (Anecdotally, I spoke some years ago with an acquaintance Googler who told me that he enjoyed working there precisely because he was working on privacy issues that worked against some of the advertising side of the business.)

I've worked in telecommunications, industrial systems engineering, and energy. There are ethical and moral issues in the work that I've done/do as a contributor in each of those domains, even though I'm not involved day-to-day in decision making that feels particularly moral.

One of the base assumptions we probably need to make in our work is that whatever we do will always be misused in the worst possible way. If we explore that idea, it might give us some sense for how to structure our output to curtail the worst of the damages.


> The early days of search in Google or social in Facebook probably didn't elicit the same kind thought process as a lethality engineering post.

It did for at least one person (me). I was 16 in 2004 with 11 years of dev experience, trying to decide whether to go out to SV, go to college for CS, or do something else. I was from the same city/community as Larry Page and in Zuck's age group, so it wasn't an absurd consideration to try. Lots of things went into my decision to do something non-CS related for college, but morals were one of the reasons I didn't go to SV (I objected to the professionalization of the web + Zuck creeped me out + I didn't agree with cutting out humans/curators from the search process like Google did).

It's just that until very recently, people either thought I was lying OR that I was just batshit insane. Who is invited to a gold rush and doesn't go?

I can't imagine I was the only one.


> I can't imagine I was the only one.

I'm sure not, and hopefully the description I provided isn't a blanket one. And, to be clear, I'm also not trying to say that working for any of those organizations is per se unethical. I don't think that this is the case.

The point, rather, is that ethical and moral considerations are actually much nearer to us than might appear at first blush. Sometimes this happens by the mere nature of the work (killing people more efficiently) and sometimes by scale (now when we surface search results, we make direct impacts on what people learn, where they shop, how they receive advertisements, etc., none of which was true in 1999). Navigating this isn't easy (indeed, you can make an argument that there is a morally good outcome for killing people more efficiently; I'm not saying it's necessarily a good one, but that one can be made), but we don't routinely equip people to think about it.

To make matters worse, our cultural assumptions shift over time. The Google/Facebook difference is illustrative. Page and Brin are a generation older than Zuckerberg, and their assumptions about what it means to be moral are probably not the same. These assumptions also change based on circumstance--when we scale a business from a garage to a billion dollars, it's hard to maintain the True North on your moral compass (assuming such a thing exists).

Anyway, I think a deep skepticism about human nature and the utility of technology is probably very useful in these situations.


But is the world better off if moral people avoid immoral jobs?

I believe the world shows there is plenty enough supply of talented people that are willing to do immoral jobs. So removing yourself from the pool of candidates makes little difference.

Alternatively, one could work in an immoral job and make a difference from the inside.

Why not do that? Perhaps to feel impotently virtuous, or perhaps the work couldn’t be stomached by the virtuous, or perhaps the virtuous but weak are scared of losing their virtuousness...


I think you were hard on him. There should be no ethical qualms when our weapons are used on enemies who seek to kill us or attack our interests.

Also, if an ethical person doesn’t take this job, someone far more unethical probably will. And they will raise no objections if they should ever be necessary. Kind of like how a lot of bad people become police officers when no one good wants to do it.


I can relate to that. In my career I stepped out and into defense, and it never really bothered me that much, to be honest. But then it was always things like fighter jets and helicopters sold to NATO members; I never had to fall back on the rationalization that we only build the weapons carrier and not, e.g., the missiles that actually cause harm.

I always drew the line at small arms, though. Way more people die because of those, they end up in every conflict, and there have been too many scandals of small arms manufacturers circumventing export restrictions. Quite recently I added supporting countries like Saudi Arabia and the UAE to that list; even though the job would have been really interesting, providing highly sophisticated training services to the Saudis is nothing I could do and still look at myself in the mirror. And civil aerospace is fun as well.


I worked in defense too, might go back. When I get calls I'm like "I don't do work for the Saudis or the DEA" and half the time the recruiter is like "Uh, I said this job is for Raytheon."

"Yeah, but who's their client?"


One way or the other, regardless of the company, probably one if not both of those countries. Sure, those countries are rich; I just hope that Ukraine showed us in the democratic West that certain values, like human rights, shouldn't be compromised on, which we all did over the last decades.

I do understand why those companies chase Saudi and UAE contracts, that's where the money is. Maybe that changes if NATO members increase defense spending, it would be a nice side effect, wouldn't it?


Sadly, yeah, the big contractors work for anyone it's legal to work for. I'll just make sure I don't end up on a program working for scumbags. If I get canned because I won't work for someone, I get canned and life goes on. My security clearance is worth a whole lot to the right person.


A friend is a very good university lecturer in physics, and a pacifist. He isn't particularly pleased about the fact that a decent number of his students will turn the particular lessons he teaches towards the production of weapons.


Lethality Engineer? Is that a P.Eng. kind of position? If your work doesn't actually kill anybody, could you be sued for malpractice and lose your license?


They don't take anybody, the interview is murder.


So, the gist of this article is "oh look, machine learning can be used to build super-weapons!". Of course it can, and we have no shortage of (other) tech to make our lives miserable. And of course the problem is not the tech itself but the people and their institutions. That paper from Urbina et al. is at best academic click-bait. They are optimizing for publishing something and getting cited, not for social good. They should stop.


Interesting exercise, perhaps the harmful molecule generating AI still generates helpful molecules because molecules harmful at a certain dose may sometimes be very beneficial in a (much) lower dose. And the other way around of course.

Perhaps we should simply have one “biologically active molecule” generating network. The dose will ultimately determine the toxicity.


> And the other way around of course.

Whaaat? Are you saying there exist molecules which are very harmful in a (much) lower dose, but are beneficial at a higher dose?

Do you have any examples?


So, as I said, my remark didn't come out right, but some molecules may be considered harmful at a low dose and harmless at a high dose if they stabilize a deteriorating condition at or over some threshold concentration. Yeah, I know it's a stretch, but you got me thinking… It's not that clear cut.

I mean the urine of someone on chemotherapy is pretty toxic, still we consider the molecules beneficial to the patient overall (the patient-tumor system if you will, not the patient by themselves).


I am saying that the network that comes up with “good” molecules will produce molecules that are very harmful as well, presumably at higher doses.

I mean, take some beta blockers (helpful molecules) at 100x the normal dose: pretty harmful.

Edit: Yeah my original comment didn’t come out right, I agree.


I couldn't help but think of homeopathy with the above sentence.


Some snake venoms will stop your heart… but at a lower dose they will simply ease the heart and lower your blood pressure. For some examples: [0]

[0]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6832721/


Anyone wanna ELI5? It's useful for both explainer and receiver. ;)


Software is able to simulate the effect of chemical compounds / molecules on the human body. This can be used to find drugs that do specific things, or stronger versions of existing drugs. For example, you could look for very strong but very short-acting sleeping pills that immediately make you fall asleep, but cause zero grogginess the next day. Or you could optimize antibiotics to have a long half-life, so you only have to take them once, instead of 3 times a day for a week, which you can easily forget.

Now think about nerve gas. We have discovered lots of different nerve gas agents and know pretty well how much of each type you need to kill a human. Said software can be used to find new versions of nerve gas that kill at even lower concentrations. You could also optimize for other variables: nerve gas that remains on surfaces and doesn't decay by itself, for example.
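Roughly, in toy Python (every function here is a made-up stand-in, not a real chemistry model), the drug-discovery side of that workflow looks like: propose lots of candidate molecules, score each with learned property predictors, keep the best.

    import random

    def predicted_potency(mol):
        # Stand-in for a trained activity model; real ones take SMILES strings
        # or molecular graphs and return a predicted binding affinity.
        return sum(mol) / len(mol)

    def predicted_half_life(mol):
        # Stand-in for a trained pharmacokinetics model.
        return max(mol)

    def propose_candidate():
        # Stand-in for a generative model proposing a new molecule.
        return [random.random() for _ in range(8)]

    def objective(mol):
        # "A potent antibiotic you only take once": reward both properties.
        return predicted_potency(mol) + predicted_half_life(mol)

    candidates = [propose_candidate() for _ in range(10_000)]
    best = max(candidates, key=objective)
    print(round(objective(best), 3))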


Before, computers were used to make less poisonous chemicals.

Now, the people asking computers to do that realized they can ask the computers to make more poisonous chemicals.


They had an AI that looked for safe drugs by minimizing an estimate of lethality, changed it to 'maximize', and the computer spewed out known nerve agents.
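Schematically (with invented stand-in functions, not the paper's actual code), that change is nothing more than flipping a sign in the objective the generator is steered by:

    import random

    def predicted_toxicity(mol):
        # Stand-in for the learned lethality/toxicity estimator.
        return max(mol)

    def propose_candidate():
        # Stand-in for the generative model.
        return [random.random() for _ in range(8)]

    def drug_objective(mol):
        # Normal drug discovery: penalize predicted toxicity.
        return -predicted_toxicity(mol)

    def weapon_objective(mol):
        # The "dual use" experiment: reward it instead.
        return predicted_toxicity(mol)

    candidates = [propose_candidate() for _ in range(10_000)]
    safest = max(candidates, key=drug_objective)
    nastiest = max(candidates, key=weapon_objective)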


I'm sure I'm not the first person to consider this, but ...

RNA molecules can often be "evolved" in vitro to bind/inhibit target molecules with high specificity (e.g., https://en.wikipedia.org/wiki/Systematic_evolution_of_ligand...)

I imagine it would not be difficult to create RNAs that inhibit some essential human enzyme and then use the RNAs for targeted assassination.

I mean, if you're doing an autopsy, you might run standard drug tests for poisons, but who's gonna screen for a highly specific RNA?


Have you seen the latest Bond movie?


No. Is that part of the plot?

edit: just read the wiki for the latest Bond movie. Apparently, there is nothing new under the Sun.

Thank you.


Does this fall into the category of research "try not to make public"? Or is this category only wishful thinking on my part.


Why make a chemical weapon when you can just tweak a virus which self-replicates?

BA.2 is even more infectious than BA.1, which is saying something. Imagine an engineered BA.3 with even more spread, and then make it even more deadly. You might even be able to target it to one race or region if there is a gene specific to that area.

Always hoped the future would be Star-Trek-like, but it seems all it takes is one dictator or terrorist to end the world: slowly at first, but then it would double every other day and be impossible to stop.


If you make it too deadly, maybe it doesn't spread as far (because the hosts die). Make it just the right amount of deadly!


Providing chemical plants with models to estimate lethality of orders could be a great use case for this work.


So, their tool will draw molecules that are good at doing harm, and that is it? No word on stabilization (which makes it safe to handle), synthesis, purification and such. I'd wager that most of these substances have at some point been on somebody's blackboard, but deemed impractical or infeasible, and then not pursued, and that's why we don't know them by name today.

Still a scary lesson though.


This is not really that worrying IMO - we already have weaponized toxins, viruses, and enough explosives to blow up the entire planet. So what if an AI can come up with something a little bit worse? It isn't the existence of these things that's stopping us all from killing each other.


The use of nuclear weapons would be... obvious: if an explosion in the 10-kiloton range or bigger happens, it's a nuclear weapon. There aren't enough nuclear powers to make the use of nuclear weapons plausibly deniable.

The use of chemical weapons might not be as obvious if they are slow acting. And the production of chemical weapons is much easier than that of nuclear weapons. Though, the dispersion of chemical weapons is non-trivial.

The use of biological weapons need not be obvious at all -- "it's a naturally-evolved pathogen, this happens!". The development and production of biological weapons is much easier than that of nuclear weapons. Human and animal bodies can be made to help spread biological weapons, so their dispersion can be trivial. The only thing that a bioweapons user might need ahead of time is treatment / vaccines, unless the bioweapon is weak and the real weapon is psychological.

Sobering thoughts.


I'm most worried about state actors.


Part of my job is optimizing for ARM.


.


You might be looking for this thread https://news.ycombinator.com/item?id=30699673


Is anybody searching for compounds that reduce evil intent? Something that would mellow people out without causing hallucinations. A mass tranquilizer? Not effective against lone operatives but able to be deployed against an invading army.


Uh, I don't know about that...

https://en.wikipedia.org/wiki/Serenity_(2005_film)

On a more serious note, anything that's going to affect behavior is going to have a dosage range. Too little absorbed, and there won't be enough effect. Too much, and that will harm / kill people in interesting ways.

With chemical weapons, you only worry about the agent accumulating enough to kill your enemies. An enemy receiving more than a lethal dose isn't a problem.


I believe that's called a sedative.

Most armies aren't filled with people with evil intent; they're filled with draftees who couldn't get out of it.



The US also built bombs containing that agent, BZ, but destroyed their stockpiles in 1989.

https://en.wikipedia.org/wiki/M44_generator_cluster

https://en.wikipedia.org/wiki/M43_BZ_cluster_bomb

> The M44s relatively small production numbers were due, like all U.S. BZ munitions, to a number of shortcomings. The M44 dispensed its agent in a cloud of white, particulate smoke.[3] This was especially problematic because the white smoke was easily visible and BZ exposure was simple to prevent; a few layers of cloth over the mouth and nose are sufficient.[5] There were a number of other factors that made BZ weapons unattractive to military planners.[5] BZ had a delayed and variable rate-of-action, as well as a less than ideal "envelope-of-action".[5] In addition, BZ casualties exhibited bizarre behavior, 50 to 80 percent had to be restrained to prevent self-injury during recovery.[5] Others exhibited distinct symptoms of paranoia and mania.[5]


Of course they are. Among the "evil intent" they would reduce is any desire to rebel against your government, so you bet all big intelligence agencies are looking into it, for instance. Science fiction wrote about this decades ago.

Fortunately, there are a lot of considerations involved in the deployment of anything. It's easier said than done to get something of a medical nature into a population surreptitiously, because it's hard to get a certain dose into one person without someone else getting not enough and yet someone else getting way too much. You'd have to come up with a way of delivering a medical dose in a controlled fashion and lie about it or something; you couldn't just sneak it into the food/water reliably.

Further, just because someone can name the exact complicated effect they'd like doesn't mean there's a drug that corresponds to it. Serenity, already mentioned, is a bit of a silly example in my opinion because such a large effect should have been found during testing. But it does no good to pacify the population such that they'd never dream of so much as peacefully voting out the current leaders if the end result is that nobody would ever dream of so much as having enough ambition to show up to their jobs, and you end up conquered by the next country over without them even trying, simply because they economically run circles around you. Or any number of other possible second-order effects. In a nutshell, it's dangerous to try to undercut evolution just to stay in power if not everywhere decides to do so equally, because you'll be evolved right out along with the society you putatively rule. Evolution is alive and well, and anyone who thinks it's asleep and they can screw around without consequences is liable to get a lethal wakeup call.


It's been suggested, e.g. https://www.vice.com/en/article/akzyeb/link-between-lithium-...

> The report states: “These findings, which are consistent with the finding in clinical trials that lithium reduces suicide and related behaviours in people with a mood disorder, suggest that naturally occurring lithium in drinking water may have the potential to reduce the risk of suicide and may possibly help in mood stabilisation, particularly in populations with relatively high suicide rates and geographical areas with a greater range of lithium concentration in the drinking water.”


You have to reach deep into the internet to find the original recording of "PENTAGON BRIEFING ON REMOVING THE GOD GENE"

The number of people who feel the need to "debunk" it makes it all the more mysterious.



