We can probably all agree that an AGI should be able to form questions, or more generally, to seek out the information it needs in order to figure out an answer.
Not only are there no LLMs in existence today that can do this without explicit action mapping, but the mechanism for storing a new piece of information would rely on a large number of training runs for transfer learning to retain it, and that is simply not how we humans work.