A real intelligence would recognize that this task is better solved with an automated tool, and would actually use one. ChatGPT is capable of writing and executing Python code, but it doesn't occur to it to use that capability in cases like this.
Thanks, that was essentially the test. I've gotten into a number of disagreements with people on HN about whether LLMs are 'just' token predictors, whether they 'understand' (whatever we mean by that), whether there's a guiding intelligence behind the output, whether they're 'just' language calculators, etc.
As someone else in this thread nicely put it, the tools are being sold as a hop, skip, and jump away from AGI. They clearly aren't. ChatGPT tells us to "ask anything." I did that. There is no 'there' there with these tools. They aren't even dumb.