I'm a complete bystander, but one flag for me with current approaches is the ongoing separation between training and runtime. Robotics went through something similar: you'd run one program that does SLAM while you teleop the robot to map your environment, then shut it down and hand the static map off to a separate localization + navigation stack.
Just as robots had to graduate to continuous SLAM, navigating while building and constantly updating the map, I feel there's a big missing piece in current AI: a system that can simultaneously act and learn, reflect on gaps in its own knowledge, and express curiosity in order to close them, one that can ask a question out of a desire to know rather than as a party trick.
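To make the contrast concrete, here's a minimal sketch (all class interfaces and names are hypothetical, just for illustration) of the two loops I mean: "train, freeze, deploy" versus "act while learning and asking":

```python
# Hypothetical sketch: not any real library's API, just the shape of the two loops.

def offline_pipeline(model, training_data, environment, steps):
    """Today's common pattern: learn once, then act with frozen weights."""
    model.fit(training_data)                      # separate training phase
    for _ in range(steps):
        obs = environment.observe()
        environment.act(model.predict(obs))       # no further updates at runtime

def continuous_pipeline(model, environment, steps, uncertainty_threshold=0.5):
    """The 'continuous SLAM'-style loop: act, learn, and notice knowledge gaps."""
    for _ in range(steps):
        obs = environment.observe()
        action, uncertainty = model.predict_with_uncertainty(obs)
        if uncertainty > uncertainty_threshold:
            # Curiosity hook: ask because the system actually needs the answer.
            answer = environment.ask(model.formulate_question(obs))
            model.update(obs, answer)
        feedback = environment.act(action)
        model.update(obs, feedback)               # learning never stops
```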