How would one prove the opposite? That a human actually understands anything about the game and isn't just reacting to the state of the game to suggest the next move? I'm not saying AI as it exists today understands; I'm just saying "understanding" isn't a good metric unless it works in reverse.
I don't think we can with a game. Games are a progressive sequence of states with permitted transitions defined by the rules; they are inherently reactive. The only way to prove understanding is to ask things like, "Why did you make that move?", or maybe more specifically, "Why was that move the one that best maximized your chances of winning?" I'm not sure AlphaGo could answer that question.
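To make that concrete, here's a toy sketch (simple Nim, deliberately not AlphaGo or any real engine): a game is just states plus rule-defined legal transitions, and a "player" can be a pure function from state to move. It can play optimally without being able to explain anything.

```python
from typing import List

State = int  # number of stones left on the table
Move = int   # stones taken this turn

def legal_moves(state: State) -> List[Move]:
    """The rules: you may take 1-3 stones, but not more than remain."""
    return [n for n in (1, 2, 3) if n <= state]

def reactive_player(state: State) -> Move:
    """Purely reactive optimal play: leave the opponent a multiple of 4 if possible."""
    for move in legal_moves(state):
        if (state - move) % 4 == 0:
            return move
    return 1  # losing position; any legal move will do

# The policy wins whenever winning is possible, yet nothing in it can answer
# "why was that move the one that maximized your chances of winning?"
print(reactive_player(10))  # -> 2 (leaves 8, a multiple of 4)
```

The point being: a state-to-move mapping is all the game itself can ever test for.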
Basically, you need to ask questions that require meta-cognition, like, "What does Mary think about you?" That requires:
* Understanding of yourself as an entity.
* Understanding of Mary as another entity, with its own state.
* The capability to use previous interactions to approximate that other entity's state (a toy sketch of this follows below).
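Here's a rough illustration of those three requirements as data structures; all names are hypothetical and this isn't a claim about how any real system implements a theory of mind.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SelfModel:
    # 1. Understanding of yourself as an entity.
    traits: Dict[str, str] = field(default_factory=dict)

@dataclass
class OtherModel:
    # 2. Understanding of Mary as another entity, with her own internal state.
    name: str
    believed_opinions_of_me: Dict[str, str] = field(default_factory=dict)

def update_from_interactions(other: OtherModel, interactions: List[str]) -> None:
    # 3. Use previous interactions to approximate that other entity's state.
    for utterance in interactions:
        if "thanks" in utterance.lower():
            other.believed_opinions_of_me["helpful"] = "probably"

me = SelfModel(traits={"role": "assistant"})
mary = OtherModel(name="Mary")
update_from_interactions(mary, ["Thanks, that really helped!"])
print(mary.believed_opinions_of_me)  # -> {'helpful': 'probably'}
```

Answering "What does Mary think about you?" means reading off that second model, not just reacting to the latest input.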