Humans are not AGIs. We're specialised in human survival, not general intelligence. We're actually pretty limited in intelligence in many ways, and the environment doesn't support generality. Without a proper challenge, an agent would not become superintelligent. The cost of developing such an intelligence would conflict with the need to minimise energy for survival.
No, if we were, we could figure out how the genome actually works, or how a neural net makes its decisions. But we can't because, among other things, we have a limited working memory of roughly 7 ± 2 objects.
Programmers know what it's like to live at the edge of the mind's capacity to grasp the big picture. We constantly reinvent the wheel in the quest to make our code more graspable and debuggable. Why? Because code is often more complex than the brain can handle.
An AGI would not have such limitations. Our limitations emerged as a tradeoff between energy expenditure and the ability to solve novel tasks. If we had a larger or more complicated brain, we would require more resources to train it. But resources are limited; we need to be smart while being scrappy.
For the record I don't think there is any general intelligence on our planet. A general intelligence would need access to all kinds of possible environments and problems. There is no such thing.
There's also the No Free Lunch theorem. It might not apply directly here, but it gives us a nice philosophical intuition for why AGI is impossible:
> We have dubbed the associated results NFL theorems because they demonstrate that if an algorithm performs well on a certain class of problems then it necessarily pays for that with degraded performance on the set of all remaining problems. [1]
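The NFL intuition is easy to check on a toy case. The sketch below (my own illustration, not from the paper) enumerates every function from a 4-point domain to {0, 1} and compares three fixed search orders by the best value found after k evaluations. On any single function the orders differ, but summed over *all* functions they are identical:

```python
from itertools import product

def best_after(order, f, k):
    # Best value seen after evaluating the first k points in this search order.
    return max(f[x] for x in order[:k])

domain = [0, 1, 2, 3]
# All 2^4 = 16 possible functions from the domain to {0, 1}, as value tuples.
functions = list(product([0, 1], repeat=len(domain)))

orders = {
    "ascending":  [0, 1, 2, 3],
    "descending": [3, 2, 1, 0],
    "shuffled":   [2, 0, 3, 1],
}

for k in range(1, len(domain) + 1):
    totals = {name: sum(best_after(order, f, k) for f in functions)
              for name, order in orders.items()}
    # Averaged over every possible function, all fixed orders tie exactly.
    assert len(set(totals.values())) == 1
    print(k, totals)
```

Any advantage one order gains on some functions is paid back, exactly, on the rest. That's the sense in which no search strategy is "generally" better, and why generality over all possible problems is such a suspect notion.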
Another argument relies on the fact that words are imprecise tools for modelling reality. Language is itself a model, and like all models, it's good at some tasks and bad at others. There is no perfect language for all tasks. Using language does not automatically make us 'generally intelligent'. We're specialised intelligences.
We can communicate well about pretty much anything (we think, at least). That doesn't mean we possess the intellectual toolkit to handle basically any intellectual task. It's easy to believe that we do, but that belief is just as easily explained by the fact that we've evolved for millions of years to be well suited to the environments we usually find ourselves in. We wouldn't call our bodies general-purpose bodies. They may seem that way at times, since they too have been tuned by millions of years of evolution to suit most of the environments we encounter. But put our bodies in a different environment (like the ocean, or the desert, or really high altitudes) and it becomes immediately obvious that they're not general purpose, but a collection of various adaptations.

Similarly, when you put humans in novel intellectual environments, it seems pretty clear that we're not general intelligences. After all, the math involved in balancing a checkbook is much simpler than the math involved in recognizing 3D objects, yet we do the simple task only with great difficulty, while the difficult task is done without struggle.
One way to think about AGI: at what point could you drop out of high school and still do well in life (or would you even need high school)? It's true that's not a survival issue, but sadly, it's not a test of "pure knowledge" either. There is a great deal of social structure, even "fluff", that is only relevant for interaction (like getting an '80s reference).