"If Dr. Good finishes an AI first, we get a good AI which protects human values. If Dr. Amoral finishes an AI first, we get an AI with no concern for humans that will probably cut short our future."
AI advanced enough to be "good" or "evil" won't be developed instantaneously, or by humans alone. We'll need an AI capable of improving itself. I believe the author's argument falls apart at this point: surely any AI able to evolve will eventually converge to the same point, regardless of whether it was started with the intention of doing good or evil. Whatever ultra-powerful AI we end up with is effectively an inevitability.
I think he's suggesting there would be a critical mass of intelligence, if there is such a thing. Humans might not survive the transition either way, whether the AI is malevolent or good.