> Then he makes the point that this requires embodiment, that they have a physical body in space
He makes an even stronger, curious claim: that "This realization is often called the “embodiment problem” and most researchers in AI now agree that intelligence and embodiment are tightly coupled issues."
That makes it sound as if there is a consensus among researchers that strong AI requires a robotic body, which I doubt.
So do I. But I'm mindful of the flawed assumption, made by the best minds in symbolic AI from the 1950s to the 1980s, that AI required no more than raw facts and logic. Let's not repeat that mistake.
I'm intrigued by the potential of deep nets to 'simulate' the basis for symbolic grounding using something like thought vectors (TVs), especially if those vectors can be revised and augmented through personal experience. Bootstrapping one's knowledge base from someone else's rich set of TVs may go a long way toward letting an AI become grounded vicariously.
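To make the idea concrete, here is a minimal toy sketch of that bootstrap-then-revise loop. The vectors, words, and the `revise` blending rule are all hypothetical illustrations (real TVs would come from a pretrained deep net, not hand-made 4-d lists): an agent inherits borrowed vectors in which related concepts already cluster, then nudges a vector toward a new "experience" vector of its own.

```python
import math

# Hand-made 4-d "thought vectors" — purely illustrative stand-ins for
# embeddings a real system would borrow from a pretrained model.
base_tvs = {
    "apple":  [0.9, 0.1, 0.0, 0.2],
    "banana": [0.8, 0.2, 0.1, 0.3],
    "car":    [0.1, 0.9, 0.8, 0.0],
}

def cosine(a, b):
    """Cosine similarity: how aligned two concept vectors are."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def revise(tv, observation, lr=0.3):
    """Blend a borrowed vector toward a new 'experience' vector.

    A hypothetical stand-in for learning from personal experience:
    a simple convex combination, weighted by a learning rate.
    """
    return [(1 - lr) * x + lr * y for x, y in zip(tv, observation)]

# Vicarious grounding: the borrowed vectors already encode that
# apple is more like banana than like car.
assert cosine(base_tvs["apple"], base_tvs["banana"]) > \
       cosine(base_tvs["apple"], base_tvs["car"])

# Personal experience then revises the inherited knowledge.
observation = [0.5, 0.5, 0.5, 0.5]
base_tvs["apple"] = revise(base_tvs["apple"], observation)
```

The point of the sketch is only the shape of the process: inherit someone else's structured vector space, then update it locally, rather than ground every concept from scratch through a body.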