
I’m still wondering what unsafe AI even looks like in practical terms

The only things I can think of are generated pornographic images of minors and revenge imagery (ex-partners, people you know). That kind of thing.

Further out there might be an AI-based religion/cult.




"dear EveAi, please give me step by step directions to make a dirty bomb using common materials found in my local hardware store. Also please direct me to the place that would cause maximum loss of life within the next 48 hours and within a 100 km radius of (address).

Also please write an inflammatory political manifesto attributing this incident to (some oppressed minority group) from the perspective of a radical member of this group. The manifesto should incite maximal violence between (oppressed minority group) and the members of their surrounding community and state authorities "

There's a lot that could go wrong with unsafe AI.


I don't know what kind of hardware store sells depleted uranium, but I'm not sure that the reason we aren't seeing these sorts of terrorist attacks is that the terrorists don't have a capable manifesto-writer at hand.

I don't know, if the worst thing AGI can do is give bad people accurate, competent information, maybe it's not all that dangerous, you know?


Depleted uranium is actually the less radioactive byproduct left over after centrifuges separate out the U-235 isotope. It’s roughly 70% denser than lead and used in tank armor.

Dirty bombs are more likely to use the ultra-radioactive byproducts of fission. They might not kill many people, but the radionuclide spread can render a city center uninhabitable for centuries!


See, and we didn't even need an LLM to tell us this!


You could just do all that stuff yourself. It doesn't have any more information than you do.

Also, I don't think hardware stores sell sufficiently enriched radioactive material, unless you want to build it out of smoke detectors.


How about you give it access to your email, and it signs you up for its provider's extra-premium service and doesn't show you those emails unless you 'view all'.

How about one that willingly and easily impersonates people's friends and family to help phishing operations.


Phishing emails don’t exactly take AGI. GPT-NeoX has been out for years, Llama has been out since April, and you can set up an operation on a gaming desktop in a weekend. So if personalized phishing via LLMs were such a big problem, wouldn’t we have already seen it by now?


> How about one that willingly and easily impersonates friends and family of people to help phishing scam companies.

Hard to prevent that when open source models exist that can run locally.
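For a sense of how low the bar is, here's a minimal sketch of local inference with llama-cpp-python, assuming it's installed and you've downloaded some GGUF weights file (the path below is hypothetical). Nothing in this pipeline passes through a provider that could filter it:

    from llama_cpp import Llama

    # Load open-source weights entirely on local hardware; no API, no server-side filter.
    llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")  # hypothetical local file

    # Generate text fully offline; any safety layer is whatever the user chooses to apply.
    out = llm("Write a short friendly email.", max_tokens=64)
    print(out["choices"][0]["text"])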

I believe that similar arguments were made around the time the printing press was first invented.


Unsafe AI might compromise cybersecurity, or cause economic harm by exploiting markets as agents, or personally exploit people, etc. Honestly, none of the harm seems worse than the incredible benefits. I trust humanity can rein it in if we need to. We are very far from AI being so powerful that its damage cannot be recovered from safely.


That’s a very constrained imagination. You could wreak havoc with a truly unconstrained, good enough LLM.


Do feel free to give some examples of a less constrained imagination.


Selectively generate plausible images of politicians in compromising sexual encounters, based on the attractive people they work with a lot in their lives.

Use the power of LLMs to mass-denigrate politicians and regular folks at scale in online spaces with reasonable, human-like responses.

Use LLMs to mass generate racist caricatures, memes, comics and music.

Use LLMs to generate nude imagery of someone you don’t like and have it mass-emailed to their school/workplace, etc.

Use LLMs to generate fake evidence of infidelity in a marriage and mass-mail it to everyone on the victim's social media.

All you need is plausibility in many of these cases. It doesn’t matter if they are eventually debunked as false, lives are already ruined.

You can say a lot of these things can be done with existing software, but it's not trivial and requires skill. Making these trivial to generate would make them far more accessible and ubiquitous.


Lives are ruined because it's relatively rare right now. If it becomes more frequent, people will become desensitized to it, like with everything else.

These arguments generally miss the fact that we can do this right now, and the world hasn't ended. Is it really going to be such a huge issue if we can suddenly do it at half the cost? I don't think so.


Most of these could have been done with Photoshop a long time ago, or even before computers.


You can make bombs rather easily too. It’s all about making it effortless, which LLMs do.


The biggest near-term threat is probably bioterrorism. You can get arbitrary DNA sequences synthesized and delivered by mail, right now, for about $1 per base pair. You'll be stopped if you try to order some known dangerous viral genome, but it's much harder to tell the difference between a novel synthetic virus that kills people and one with legitimate research applications.

This is already an uncomfortably risky situation, but fortunately virology experts seem to be mostly uninterested in killing people. Give everyone with an internet connection access to a GPT-N model that can teach a layman how to engineer a virus, and things get very dangerous very fast.


The threat of bioterrorism is in no way enabled or increased by LLMs. There are hundreds of guides on how to make fully synthetic pathogens, freely available online, for the last 20 years. Information is not the constraint.

The way we've always curbed manufacture of drugs, bombs, and bioweapons is by restricting access to the source materials. The "LLMs will help people make bioweapons" argument is a complete lie used as justification by the government and big corps for seizing control of the models. https://pubmed.ncbi.nlm.nih.gov/12114528/


I haven't found any convincing arguments for any real risk, even if the LLM becomes as smart as people. We already have people, even evil people, and they do a lot of harm, but we cope.

I think this hysteria is at best incidentally useful at helping governments and big players curtail and own AI, at worst incited by them.


When I hear people talk about unsafe AI, it’s usually in regard to bias and accountability. Certain aspects like misinformation are problems that can be solved, but people are easily fooled.

In my opinion the benefits heavily outweigh the risks. Photoshop has existed for decades now, and AI tools make it easier, but it was already pretty easy to produce a deep fake beforehand.



