
The problem with defining intelligence as "if a human can do it, it is intelligence" is that it doesn't break down the problem space correctly. It also leaves potential gaps where a system could have a form of intelligence that humans do not, leading humans to misjudge that system's capabilities, which could end in disaster (a common AI risk scenario).


Reminds me of the book “Other Minds”, where the author discusses consciousness in octopuses.



