Ok, I'll take a shot: "it would follow that anyone tackling AGI would have some experience applying machine learning competitively in some public space".
No, that would absolutely not follow. (I'm a pretty good devil's advocate, but I can't with this one.)
And given AGI would come from some completely new breakthrough not related to the current practice of "machine learning", competitions may be completely moot. They may be great for finding the nice increments in the state of the art of machine learning, but they are unlikely to help much with AGI.
I could even imagine a stumbling AGI being very stupid compared to just about any machine learning solution thrown at it - yet being undeniably AGI. Like a dog not being very good at DOTA, Star Craft or Chess, yet it undeniably possesses some kind of general intelligence.
> And given AGI would come from some completely new breakthrough not related to the current practice of "machine learning"
I’m not so sure of that. Intuitively, AGI feels like generalizing and automating what is already done in specialized problems: having a meta program that orchestrates and applies specialized subsystems and adapts existing ones. If playing Go, playing StarCraft, speech recognition, and computer vision are already built from the same building blocks, then a meta program trained just to recognize the type of problem and route it to the appropriate subsystem with some parameter tweaks feels like a path to AGI. In the dog example you don’t even need subsystems that are individually better than humans.
Edit: my point is that I feel AGI is the interface and orchestration between specialized subsystems we already know how to create. Trying to train one big network by generalizing AlphaGo is a dead end, but having simpler sub-networks ready to be trained on a specific problem seems feasible. Much like the brain at first looks like one big network, but in practice has specialized areas. The key is how these networks are interfaced and what information they exchange to self-adapt. Maybe these interfaces are themselves sub-networks specialized in the problem of interfacing and “tuning hyperparameters”.
In short: I think when we figure out how to automate Kaggle competitions (recognize the pattern of the problem, then instantiate and train the relevant subsystem) we’ll have taken a good step toward AGI. We don’t need better performance in, e.g., image recognition, just to figure out orchestration.
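To make the orchestration idea concrete, here is a minimal toy sketch of a meta program that routes a task to a registered specialized subsystem. All names (`Task`, `TaskRouter`, the lambda "subsystems") are invented for illustration; in the actual proposal, the hard part, recognizing the task kind and tuning the subsystem, would itself be learned rather than a dictionary lookup.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Task:
    kind: str        # e.g. "vision", "game" -- assumed known here; in the
    payload: object  # real idea, recognizing the kind is itself the problem


class TaskRouter:
    """Toy orchestrator: dispatch a task to a specialized subsystem."""

    def __init__(self) -> None:
        self.subsystems: Dict[str, Callable[[object], str]] = {}

    def register(self, kind: str, solver: Callable[[object], str]) -> None:
        self.subsystems[kind] = solver

    def solve(self, task: Task) -> str:
        # The unsolved part of the argument is this dispatch decision;
        # here it is reduced to a trivial lookup.
        solver = self.subsystems.get(task.kind)
        if solver is None:
            return "no subsystem available"
        return solver(task.payload)


router = TaskRouter()
router.register("vision", lambda img: f"classified {img}")
router.register("game", lambda state: f"move chosen for {state}")

print(router.solve(Task("vision", "cat.png")))    # classified cat.png
print(router.solve(Task("speech", "hello.wav")))  # no subsystem available
```

The sketch only shows the plumbing; the claim in the comment is precisely that the interesting intelligence would live in learning the `solve` dispatch and the "parameter tweaks", not in the subsystems themselves.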
Turning a blind eye to existing knowledge may result in reinventing things that already exist. Nobody expects students to stop at the same concepts as their teachers; the point is simply to leverage existing knowledge.
> I could even imagine a stumbling AGI being very stupid compared to just about any machine learning solution thrown at it - yet being undeniably AGI. Like a dog not being very good at DOTA, Star Craft or Chess, yet it undeniably possesses some kind of general intelligence.
People have debated for ages whether animals are intelligent. That is a different kind of problem: how to define intelligence in the first place. The most famous recent attempt is the Turing test.