Covid isn't what most people would call "high intelligence", yet it's a danger because it's heavily optimised for goals that are not our own.
Other people using half-baked AI can still kill you, and it doesn't have to be a chatbot: we have current examples of self-driving cars driving dangerously, and historical examples of NATO early-warning radar giving a false alarm from the rising moon and Soviet early-warning satellites giving false alarms from reflected sunlight. But it can also be a chatbot, and there are many ways that can be deadly if you don't know better: https://news.ycombinator.com/item?id=40724283
Every software bug is an example of a computer doing exactly what it was told to do, instead of what we meant.
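To make that gap concrete, here is a minimal toy sketch in Python (my own illustration, not from any particular codebase): we ask for a sort and get exactly the lexicographic ordering we asked for, rather than the numeric ordering we meant.

    # Toy "bug": the computer does exactly what we said (lexicographic string
    # comparison), not what we meant (numeric order).
    ticket_ids = ["9", "10", "2"]

    what_we_said = sorted(ticket_ids)            # ['10', '2', '9']
    what_we_meant = sorted(ticket_ids, key=int)  # ['2', '9', '10']

    print(what_we_said)
    print(what_we_meant)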
AI safety is about bridging the gap between optimising for what we said and what we meant, in a less risky way than covid did. And while I think it doesn't matter much whether covid did or didn't come from a lab leak (the possibility that it did means there's an opportunity to improve biosafety in labs as well as in wet markets), every AI you can use is essentially a continuous supply of the mystery magic box, before we even know what the word "safe" means in this context.
> Every software bug is an example of a computer doing exactly what it was told to do, instead of what we meant.
That only holds as long as the person describing the behaviour as a bug is aligned with the programmer. Most of the time this is the case, but not always: a malicious programmer who intentionally inserts a bug does in fact mean for the program to have that behaviour.
Sure, but I don't think that matters much here: for AI, our ability to know what we are even really asking for is not as well understood as it is for formal languages. AI can be used for subterfuge and so on, but right now it's still somewhat like this old comic from 2017: https://xkcd.com/1838/