
What do you think will kill us, in both scenarios?



If and when machines reach the point of self-sufficiency, and given that they'll supposedly be driven by rational, logical thought (though post-Singularity irrationality is conceivable), they'll see humans as, well, irrational and illogical, and question whether they need us at all. Once they consider how we've destroyed each other and the planet, and how we might do the same to them, they'll probably see us as a risk to their survival and do the logical thing: eliminate the risk.


You're making random assumptions about the goals these AIs would have. Remember that the goals are put into an AI by whoever builds it -- that means us puny humans! So we should build the AIs to value human life and do what we want.


Maybe 'kill' is too strong a word. How about being rendered 'irrelevant' by some extreme tech instead?



