Right. The point isn't that there's an actionable strategy we know of right now that would give a superhuman AI world domination.
It's that we're not sure there isn't. People who first encounter the problem tend to quickly come up with reasons the AI would fail, but when you examine those reasons more deeply, they usually turn out to be less bulletproof than we'd like.
It's like playing a novel chess variant against an advanced chess engine while holding a large pawn advantage. Maybe the advantage is enough to overcome the massive skill gap, but until you've actually played, it's hard to guess how much margin is enough.