> And given AGI would come from some completely new breakthrough not related to the current practice of "machine learning"
I’m not so sure of that. Intuitively, AGI feels like the ability to generalize and automate what is already done in specialized problems: a meta-program that orchestrates and applies specialized subsystems, and adapts existing ones. If playing Go, playing StarCraft, speech recognition, and computer vision are already built from the same building blocks, then a meta-program trained just to recognize the type of problem and route it to the appropriate subsystem, with some parameter tweaks, feels like a path to AGI. In the dog example, the individual subsystems don’t even need to be better than humans.
Edit: my point is that I see AGI as the interface and orchestration between specialized subsystems we already know how to create. Trying to train one big network, like a generalized AlphaGo, is a dead end, but having simpler sub-networks ready to be trained on a specific problem seems feasible. Much like the brain at first looks like one big network, but in practice has specialized areas. The key is how these networks are interfaced and which information they exchange in order to self-adapt. Maybe these interfaces are themselves sub-networks specialized in the problem of interfacing and “tuning hyperparameters”.
In short: I think that when we figure out how to automate Kaggle competitions (recognize the pattern of the problem, then instantiate and train the relevant subsystem), we’ll have taken a good step toward AGI. We don’t need better performance in, e.g., image recognition, just better orchestration.
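To make the idea concrete, here is a minimal toy sketch of that meta-program: a router that recognizes the type of an incoming task and dispatches it to a specialized subsystem. Everything here is hypothetical illustration, the `TaskRouter` class, the keyword rules, and the subsystem stubs are stand-ins for what would really be trained models.

```python
# Hypothetical sketch: a meta-program that routes tasks to
# specialized subsystems. The subsystems are stubs; in the idea
# above, each would be a separately trained network.

def vision_subsystem(task):
    return f"vision subsystem handles: {task}"

def speech_subsystem(task):
    return f"speech subsystem handles: {task}"

def game_subsystem(task):
    return f"game-playing subsystem handles: {task}"

class TaskRouter:
    """Recognize the type of problem and route it to a subsystem."""

    def __init__(self):
        # In the full idea, this mapping would itself be a trained
        # classifier (possibly another sub-network); here it is a
        # toy keyword match standing in for that recognition step.
        self.rules = {
            "image": vision_subsystem,
            "audio": speech_subsystem,
            "game": game_subsystem,
        }

    def route(self, task_description):
        for keyword, subsystem in self.rules.items():
            if keyword in task_description:
                return subsystem(task_description)
        raise ValueError("no subsystem recognizes this task")

router = TaskRouter()
print(router.route("classify this image of a dog"))
```

The interesting open problem is exactly the part this sketch fakes: making the router itself learned, and letting it pass adaptation signals (the "parameter tweaks") back to the subsystems.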