
But there is a clear line of reasoning? An AI has to be built with some kind of goal. Most goals don't line up with humanity's interests, and the AI will pursue its goal to the extreme, since it's designed to optimise aggressively for it. AI is easier to improve than biology, so once it gets good enough it will improve itself exponentially, because that lets it achieve its goal better. Most proposed solutions to this problem don't actually work; we've yet to find a good one. It seems extremely short-sighted to never consider future situations that aren't currently happening but seem likely to: https://xkcd.com/2278/
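As a toy illustration of that "optimise aggressively for a proxy" point (a minimal Python sketch; every name and number here is made up for the example, not anyone's actual system): a greedy optimizer told to maximise a metric that merely correlates with the true goal will happily push the metric to an extreme where the true goal is wrecked.

    # Toy Goodhart's-law sketch: an optimizer pushed hard on a proxy
    # metric drifts away from the true goal it was meant to serve.
    # Everything here is a hypothetical illustration, not a real AI.

    def true_goal(x: float) -> float:
        # What we actually want: x near 1 is ideal, extremes are bad.
        return -(x - 1.0) ** 2

    def proxy_metric(x: float) -> float:
        # What the optimizer is told to maximise: agrees with the
        # true goal for small x, but rewards pushing x ever higher.
        return x

    def hill_climb(score, x=0.0, step=0.5, iters=20):
        # Greedy optimizer: move whichever way raises the score.
        for _ in range(iters):
            if score(x + step) > score(x):
                x += step
            elif score(x - step) > score(x):
                x -= step
        return x

    x = hill_climb(proxy_metric)
    print(f"proxy-optimal x = {x}")             # climbs to 10.0
    print(f"true goal there = {true_goal(x)}")  # -81.0, i.e. terrible

The point isn't the specific numbers; it's that nothing in the optimization loop ever consults the true goal, which is the alignment problem in miniature.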

There is a good real-world example of how aggressively optimising for a goal can make the world unliveable for other species: humans destroying the environment for economic growth. But at least there we care about it and are trying to change course, because, among other reasons, we need the environment to survive ourselves. An AI wouldn't necessarily need that, so it would treat us more like we treat bacteria, only keeping us around if we're useful or if we don't get in the way (and we will get in the way).

It might sound silly, but there is some solid reasoning behind it.




Let me rephrase: a clear line of reasoning that isn't an Old Testament god with some Tron lines.



