You have failed to provide a scenario, likely because no realistic scenario exists. Let's go through an example scenario and the hurdles you have to overcome for this to work. We'll use the "mayonnaise plant" example from a sibling comment.
1. The LLM needs to find an exploitable bug in a popular code base.
2. The LLM needs to write a reliable exploit for that bug.
3. The LLM needs to develop a worm that exploits that bug and spreads itself, opening access to the system.
4. The LLM needs to connect to systems and understand if they are of any significance (it found a mayonnaise plant!).
5. The LLM needs to understand the control protocols of their industrial control systems.
6. The LLM needs to understand how to make a dangerous composition from the ingredients it has on hand (let's pretend it can dump some industrial cleaning solution that is on standby for cleaning the tanks).
7. The LLM needs to assume such total control over this processing plant that it can disguise the traffic and not trigger a single alarm around malfunctions.
What you're vaguely hinting at is extremely high-skilled labor. There are a few billion-dollar businesses in those steps. I welcome you to go read up on the challenges in automated exploit generation. LLMs are nowhere close.
Now, you might rebut and say there are far simpler attacks, like phishing! That's also an extremely hard problem. Try to send email en masse and not land in a spam filter; try to do the reconnaissance necessary to generate a believable login page; try to leverage the sales guy's credentials to reach any system more meaningful than the company's Salesforce instance.
So once again, I ask, please walk me through a situation where an LLM gets anywhere close to killing even 1% of the number of people an atomic bomb could.
We are nowhere close to even a rudimentary understanding of how current LLMs actually work. All we have are low-level building blocks, and lots of philosophizing about high-level output.
Considering that, relying on some gut feeling about what LLMs "surely cannot possibly do" is reckless overconfidence. Not to mention that the next generation of LLMs, with potentially entirely new emergent properties, might be just around the corner, and the time to put safeguards into place is now, not when it's too late.
As for the scenario, all an LLM with Internet access would need to do is find a single remotely exploitable vulnerability in the Linux network stack. That would allow it to literally shut down the entire Internet, which would kill a lot more people than a single atomic bomb.
Shutting down the Internet would stop most of global trade, cripple most government operations, and trigger an economic meltdown that would make the Great Depression look insignificant by comparison, not to mention unimaginable social chaos. Even essential services like hospitals and law enforcement would be drastically impacted. Yes, they use the Internet. For many critical tasks.
I wouldn't be surprised if this led to the death of 10% of the global population in about 5 years. That's 800 million people, or around 4000 times the combined death toll of the Hiroshima and Nagasaki bombings.
The Internet is the backbone of the modern world. Short of a global nuclear war, it's hard to imagine a more catastrophic event than it suddenly becoming unavailable.
The main use of the internet is quickly passing along information. Hospitals rely on the internet primarily to handle records and financial processing. Their ability to actually treat patients would still be functional without the internet.
The loss of many of these records in the financial and governance sectors would certainly lead to a lot of dollar losses. But dollars aren't people.
I think the consequences of a sudden, permanent shutdown of the internet would be fairly dire over the short term. Grocery stores would face inventory problems, with shortages worse than what we saw in 2020. Emergency services could be stretched thin as they deal with any unrest while doing their work without the internet-connected services they relied on. Energy infrastructure could experience interruptions due to the loss of connectivity.
As for economic consequences, millions of jobs would just no longer make sense, and many others would get harder and frequently much less efficient. Certain major categories of product would no longer make as much sense: what's an iPhone that can't connect to the internet worth? It's just a telephone with a camera (that takes images you can't share).
Could it be deadly to some degree? Sure. All it takes is for the power to go out somewhere exceedingly hot in July, as the loss of air conditioning can be deadly for the elderly. But as deadly as the nukes the US dropped on Japan? I don't see it.
> what's an iPhone that can't connect to the internet worth? It's just a telephone with a camera
Nope. It's just a camera. Because the "telephone" part relies on infrastructure that is unmaintainable without the Internet, and would probably stop functioning in a matter of days.
And boom, you're back to the 19th century, where telephones weren't a thing. Except that unlike in the 19th century, there isn't any infrastructure to make things work in that situation.
How does the hospital order medical supplies now? By paper mail? Sorry, mail can't be delivered anymore, since all postal services depend on logistics systems that in turn depend on the Internet for coordination. Also, printed catalogs haven't existed for almost two decades, so the hospital won't even know what supplies are available.
And either way, the supplier doesn't have anything in stock, because their entire supply chain has collapsed, because international trade isn't a thing anymore. Did I mention that GPS (and thus most of sea/air navigation) has stopped working because the ground-based infrastructure needed to keep it running depended on, you guessed it, the Internet?
Still think that won't kill more people than the atomic bombings (which killed "only" 200k)?
I guess I'm just marginally more sanguine than you are that issues related to infrastructure and logistics could be, eventually, worked around. There would still be ways to contact people over a distance, some of which would remain available even in the initial chaos. Further, there are many people still working today who worked before internet-connected-everything, so there may still be useful institutional knowledge to bring to bear on these new problems.
My point on the iPhone was about what would happen after telephony infrastructure and supply chains had recovered (however long that takes). In the immediate aftermath of the shutdown, mobile phones would absolutely lose the ability to make or receive calls. But again, I expect this to recover somewhat after a few months.
Your point on hospitals not being able to order supplies is a good one, regardless. Without blood supplies or dialysis equipment (or a bunch of other things), people would absolutely die. Perhaps those first weeks would be deadlier than I anticipated.
> 1. The LLM needs to find an exploitable bug in a popular code base.
That's trivial in the industry context. A chunk of the stack is running decade-old stuff.
> 2. The LLM needs to write a reliable exploit for that bug.
That's what Metasploit is for, isn't it?
> 3. The LLM needs to develop a worm that exploits that bug and spreads itself, opening access to the system.
See 2 if that's your strategy, but there are others, such as plant operations staff using random LLMs-as-a-service in their work (despite corporate saying not to, though it's not like the bosses don't do it either).
> 4. The LLM needs to connect to systems and understand if they are of any significance (it found a mayonnaise plant!).
Not hard at GPT-4 level, and it will only get easier. If it starts with the goal of doing something bad at scale, it will recognize a mayonnaise plant as a viable target, should one "cross its mind".
> 5. The LLM needs to understand the control protocols of their industrial control systems.
All documented and already part of the training set. I know that one for a fact, because I've been "chatting" with GPT-4 about some nuances of an industrial protocol, and getting it to write me example code.
Industrial stuff may be closed-source and expensive, but the documentation, specs, and marketing blurbs can all be found publicly. Most underlying protocols are public and well-documented(ish). The recent push for IIoT / "Industry 4.0" is actually trying to replace most of that proprietary, security-by-obscurity stuff with web-adjacent tech - exactly the thing LLMs know best, owing to the sheer openness and popularity of everything webshit.
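For illustration, here's the kind of boilerplate I mean: a minimal sketch of polling a PLC's holding registers over Modbus/TCP with pymodbus. The host, unit id, and register addresses are made up, and the exact keyword arguments (e.g. "slave" vs. "unit") vary between pymodbus versions:

    # Minimal Modbus/TCP poll - a sketch only; the host, unit id,
    # and register addresses are hypothetical.
    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient("192.0.2.10", port=502)  # TEST-NET IP, stand-in for a PLC
    client.connect()
    result = client.read_holding_registers(address=0, count=4, slave=1)
    if not result.isError():
        print(result.registers)  # raw 16-bit register values
    client.close()

Note what's missing: no authentication, no encryption. Plain Modbus answers anyone who can reach port 502, which is exactly why "understanding the control protocol" is the easy part.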
> 6. The LLM needs to understand how to make a dangerous composition from the ingredients it has on hand (let's pretend it can dump some industrial cleaning solution that is on standby for cleaning the tanks).
I bet that, should you work around the "I'm sorry, as a large language model trained by OpenAI, I'm afraid I can't do that" issue, GPT-4 will happily give you 5+ different ways to make mayonnaise lethal. It's not rocket science - it's industrial food production. A chunk of the processes there exists to ensure the product won't develop chemical or bacterial contamination.
> 7. The LLM needs to assume such total control over this processing plant that it can disguise the traffic and not trigger a single alarm around malfunctions.
Nah, it just needs to spoof some PLC outputs somewhere, or a data feed that goes to the model-predictive control. There's a risk of triggering alarms somewhere, and hopefully most of the naive approaches would get caught at the late lab-testing / QC stages, but still - you can get far without triggering anything but maybe a dashboard warning about some outlier values, which plant operators brush off as more bugs in the industrial software.
That's if your goal is to weaponize mayonnaise. If you want to blow up the plant, well... skip step 7.
> There are a few billion-dollar businesses in those steps.
If you saw how some of those businesses work, you'd be surprised we're all still alive.
> walk me through a situation where an LLM gets anywhere close to killing even 1% of the number of people an atomic bomb could.
Nobody is saying that GPT-4 can do it on its own. But to the extent that GPT-4, or a model more advanced than it, already captures some essence of generalized thinking - and given the creativity people are showing in constructing increasingly complicated chains of LLMs and classical tools to extend both the breadth and the precision of that thinking, plus giving them every possible tool in the world - it's not hard to imagine those systems getting capable enough to screw stuff up at scale.
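To make "chains of LLMs and classical tools" concrete: the core pattern behind all those agent frameworks is embarrassingly small. A hypothetical sketch - call_llm() and the tool set here are stand-ins, not any real API:

    import json

    def call_llm(transcript):
        # Stand-in for a request to some LLM-as-a-service endpoint;
        # returns a canned action so the sketch actually runs.
        return '{"tool": "done", "arg": "stub answer"}'

    # The "classical tools" side - anything you can wrap in a function.
    TOOLS = {
        "search": lambda q: "...search results...",   # web search stand-in
        "shell": lambda cmd: "...command output...",  # arbitrary tool access
    }

    def run_agent(goal, max_steps=10):
        transcript = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            reply = call_llm(transcript)  # model picks the next action
            action = json.loads(reply)    # e.g. {"tool": "search", "arg": "..."}
            if action["tool"] == "done":
                return action["arg"]      # model declares it's finished
            observation = TOOLS[action["tool"]](action["arg"])
            transcript += [
                {"role": "assistant", "content": reply},
                {"role": "user", "content": observation},
            ]

    print(run_agent("find something interesting"))

Everything scary lives in what you're willing to put into TOOLS.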
The ultimate argument is that atomic bombs, bioweapons, and even the climate crisis were all brought about by intelligent agents. Intelligence is what gives rise to those threats, so by itself, it's more dangerous than all of them.
Also:
> What you're vaguely hinting at is extremely high-skilled labor. (...) I welcome you to go read up on the challenges in automated exploit generation. LLMs are nowhere close.
We've only been dealing with AI models capable of basic coding tasks for less than a year. We've barely even begun to apply optimization pressure to this capability. So even if LLMs are "nowhere close" today, I wouldn't take the bet that they'll remain "nowhere close" a year from now - there's an absurd amount of money and interest invested in making them capable of this, by proxy of making them capable of software dev, or of high-level thinking in general.
Mere intelligence does not build an atomic bomb. Einstein signed the letter to the president, and Oppenheimer helmed the project, but the actual construction of the bomb required massive amounts of labor and resource gathering that were under neither Einstein's nor Oppenheimer's jurisdiction. The physical ability of hypothetical AGIs or LLMs to command such resources is rather low.
The article talks about LLMs being hooked up to the Internet. Big difference.
If you really can't imagine a scenario where this might lead to people dying, you're not trying hard enough.