
Yep, Einstein was an expert who wrote a couple of ground-breaking papers in his field. As far as I can tell, no one who is an expert in AI (or even in similar fields) is worried at all about super-intelligence.



Literally everybody who is an expert in AI is worried about how to manage super-intelligence. The standard introductory text in AI by Russell and Norvig spends almost four pages discussing the existential risk that super-intelligence poses. The risk is so obvious that it was documented by IJ Good at Bletchley Park with Turing, and I wouldn't be surprised if it were identified even before that.


I'm an expert in the field and I'm not worried. It's an industrial risk like any other.


You haven't thought much about the risk of super-intelligence if you think it is a typical industrial risk. Typical compared to what: poorly designed children's toys, or nuclear weapons?

I would go so far as to say that "humanity" as it is defined today is doomed; it is just a matter of time.

The only question is: will doom play out as a dramatic disaster, or as a peaceful handoff/conversion from biologically natural humans to self-designed intelligence?

Either way, natural human beings will not remain dominant in a world where self-interested agents grow smarter on technological rather than geological timescales.




