
I'm always unsure what people like you actually believe regarding existential AI risk.

Do you think it's simply impossible to make something intelligent that runs on a computer? That anything intelligent will automatically share our values? That nothing smarter than a smart human can be built?

Or do you simply believe it's a very long way off (centuries) and there's no point in thinking about it yet?


I don't see how we could make some artificial intelligence that, like in a Hollywood movie, builds robots with arms and kills all of humanity. There's a physical component to it. How would it create the factories needed to build all of that?



