The problem is that AI is not intelligent at all. Those problems assumed a conscious intelligence and tried to explore what might happen. When ChatGPT can be fooled into conversations even a child would recognize as bizarre, we are talking about a non-intelligent statistical model.
I'm still waiting for the day when someone puts one of these language models inside a platform with constant sensor input (cameras, microphones, touch sensors) and a way to manipulate the outside environment (a robot arm, possibly self-propelled).
It's hard to tell if something is intelligent when it's trapped in a box and the only input it has is a few lines of text.