
Look carefully at these goals and tell me whether they are materially falsifiable. Can you imagine a test that determines whether or not a system has self-consciousness?

If such a test exists, we can ask whether a system of some design might pass it. But if no such test exists, and we cannot even imagine one, then you are talking about something unfalsifiable, which is another way of saying "effectively fake".



Consciousness is not important for AGI. Being able to learn new skills, adapt to new sensors, transfer knowledge across domains, learn at all, plan, replan, achieve under-specified goals, and more is what's required for AGI.

Plenty has been written about the requirements for decades now. That hasn’t changed.



