
"If Dr. Good finishes an AI first, we get a good AI which protects human values. If Dr. Amoral finishes an AI first, we get an AI with no concern for humans that will probably cut short our future."

AI advanced enough to be "good" or "evil" won't be developed instantaneously, or by humans alone. We'll need an AI capable of improving itself. I believe the author's argument falls apart at this point: any AI able to evolve will surely converge on the same end point, regardless of whether it was started with the intention of doing good or evil. Whatever ultra-powerful AI we end up with is simply an inevitability.




Why would it undoubtedly evolve to the same point?


I think he's suggesting there would be a critical mass of intelligence, if there is such a thing. Humans might not survive the transition either way, whether the AI is malevolent or a good one.

I guess we'll find out, eh?



