
We can probably all agree that an AGI should be able to form questions, or more generally to seek out the information it needs to arrive at an answer.

Not only is there no LLM in existence today that can do this without explicit action mapping, but the mechanism for storing such a piece of information would rely on a large number of training runs for transfer learning to retain it, and we humans don't actually work like that.
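For concreteness, here's a minimal sketch of what "explicit action mapping" looks like in practice: the model only ever emits text, and a hand-written harness parses that text and dispatches it to a fixed set of tools. Every name below (fake_llm, search_notes) is a hypothetical stand-in, not a real API; the point is just that the mapping from "I need information" to "go get it" is authored by humans, not learned by the model.

    import json

    NOTES = {"capital of France": "Paris"}  # stand-in knowledge source

    def search_notes(query: str) -> str:
        # A hard-coded "tool" the scaffold exposes to the model.
        return NOTES.get(query, "no result")

    def fake_llm(prompt: str) -> str:
        # Stub model: emits a canned tool call, then a canned answer.
        if "Paris" not in prompt:
            return json.dumps({"action": "search", "query": "capital of France"})
        return "The capital of France is Paris."

    def agent_loop(question: str, max_steps: int = 3) -> str:
        prompt = question
        out = ""
        for _ in range(max_steps):
            out = fake_llm(prompt)
            try:
                call = json.loads(out)  # model "asked" via structured output
            except json.JSONDecodeError:
                return out  # plain text: treat as the final answer
            # The mapping from action name to behavior is written by humans,
            # not learned by the model: this is the explicit action mapping.
            if call.get("action") == "search":
                prompt += "\nTOOL RESULT: " + search_notes(call["query"])
        return out

    print(agent_loop("What is the capital of France?"))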




> we humans don't actually work like that

That is probably not a good criterion for deciding whether something is intelligent.



