
I have not heard your definition of AGI before. However, I suspect AIs are already self-aware: if I asked an LLM on my machine to look at the output of `top`, it could probably pick out which process was itself.
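
Something like this minimal sketch, assuming a model served locally through Ollama; the model name, prompt, and `top -b -n 1` invocation are just illustrative:

    import subprocess
    import ollama  # assumes the ollama Python client and a locally running server

    # Take one batch-mode snapshot of the running processes.
    top_output = subprocess.run(
        ["top", "-b", "-n", "1"],
        capture_output=True, text=True,
    ).stdout

    # Ask the local model which of those processes is serving it.
    response = ollama.chat(
        model="llama3",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Here is the output of `top` on the machine you are "
                       "running on. Which process is you?\n\n" + top_output,
        }],
    )
    print(response["message"]["content"])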

Or did you mean consciousness? How would one demonstrate that an AGI is conscious? Why would we even want to build one?

My understanding is that an AGI is at least as smart as a typical human in every category. That is what would be useful, in any case.





