
I agree that superintelligence could bring enormous benefits to humanity, but the risks are very high as well. They are in fact existential risks, as Bostrom details in his book Superintelligence.

That is why we need to invest much more research effort in Friendly AI and trustworthy intelligent systems. People should consider contributing to MIRI (https://intelligence.org/), where Yudkowsky, who helped pioneer this line of research, works as a senior fellow.

