I genuinely laughed. Oh no somebody please save me from a chatbot that's hallucinating half the time!
Joke aside, of course OpenAI is gonna play up how "intelligent" its models are. But it's evident that there's only so much data and compute that you can throw at a machine to make it smart.
Covid isn't what most people would call "high intelligence", yet it's a danger because it's heavily optimised for goals that are not our own.
Other people using half-baked AI can still kill you, and it doesn't have to be a chatbot: we have current examples of self-driving cars driving dangerously, and historical examples of NATO early-warning radars raising a false alarm from the rising moon and Soviet early-warning satellites raising false alarms from reflected sunlight. But it can also be a chatbot; there are many ways this can be deadly if you don't know better: https://news.ycombinator.com/item?id=40724283
Every software bug is an example of a computer doing exactly what it was told to do, instead of what we meant.
AI safety is about bridging the gap between optimising for what we said and optimising for what we meant, in a less risky manner than we managed with covid. And while I think it doesn't matter much whether covid did or did not come from a lab leak (the mere possibility that it did means there's an opportunity to improve biosafety in labs as well as in wet markets), every AI you can use today is essentially a continuous supply of mystery magic boxes, shipped before we even know what the word "safe" means in this context.
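To make the "what we said vs. what we meant" gap concrete, here's a toy illustration I made up (the names and numbers are arbitrary, not from any real system): the program does exactly what it was told, just not what was meant.

    scores = {"model_a": 9, "model_b": 87, "model_c": 120}

    # What we said: pick the name with the largest *stringified* score.
    # What we meant: pick the name with the largest score.
    best = max(scores, key=lambda name: str(scores[name]))
    print(best)  # "model_a", because "9" > "87" > "120" as strings

    best = max(scores, key=lambda name: scores[name])
    print(best)  # "model_c", which is what was actually meant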
> Every software bug is an example of a computer doing exactly what it was told to do, instead of what we meant.
That only holds as long as the person describing the behaviour as a bug is aligned with the programmer. Most of the time this is the case, but not always: a malicious programmer who intentionally inserts a bug does in fact mean for the program to have that behaviour.
Sure, but I don't think that matters: our ability to know what we are even really asking for is far less well understood for AI than for formal languages. AI can be used for subterfuge and so on, but right now it's still somewhat like this old comic from 2017: https://xkcd.com/1838/
You make a good point, so I should clarify: Iterated Amplification was first proposed as a technique for AGI safety, but it happens to also be applicable to LLM safety. I study AGI safety, which is why I recognized the technique.
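In case it helps anyone reading along, here is a toy sketch of the amplify-then-distill loop as I understand it. The task (summing numbers), the split-in-half decomposition, and "distill by memoisation" are stand-ins I made up for illustration, not anyone's real training setup:

    from typing import Callable, Dict, Tuple

    Question = Tuple[int, ...]   # toy question: "what is the sum of these numbers?"
    Model = Callable[[Question], int]

    def weak_model(q: Question) -> int:
        # The starting model only answers single-number questions reliably.
        return q[0] if len(q) == 1 else 0

    def amplify(model: Model, q: Question) -> int:
        # Amplification: decompose the question, let the current model
        # answer the halves, and combine the sub-answers.
        if len(q) <= 1:
            return model(q)
        mid = len(q) // 2
        return model(q[:mid]) + model(q[mid:])

    def distill(training_data: Dict[Question, int]) -> Model:
        # Distillation stand-in: a real system would train a new model to
        # imitate the amplified answers; here we just memoise them.
        return lambda q: training_data.get(q, 0)

    # All contiguous sub-questions of 1..8, so sub-answers can be reused.
    nums = tuple(range(1, 9))
    questions = [nums[i:j] for i in range(len(nums)) for j in range(i + 1, len(nums) + 1)]

    model: Model = weak_model
    for step in range(3):
        answers = {q: amplify(model, q) for q in questions}   # amplify
        model = distill(answers)                              # distill
        correct = sum(model(q) == sum(q) for q in questions)
        print(f"round {step}: {correct}/{len(questions)} correct")
        # prints 15/36, 26/36, 36/36: each round handles questions about
        # twice as long as the previous distilled model could.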
Thanks, added to my collection of AGI-pessimistic comments that I encounter here, and that I aim to revisit in, say, 20 years. I'm not sure I will be able to say: "you were wrong!". But I do expect so.