That's not my experience. In fact it seems GPT is especially adept at metaphors and analogies, because they mirror the layout of its training data. It's very good at identifying related concepts; but try to ask it for a concept that has no relation to anything else, and it will stall out.
Q: Which kitchen metaphor does the large scale structure of our universe resemble?
> ...some scientists have suggested that the large scale structure of the universe resembles a sponge...
Q: What's an example of a recursive analogy with a layer of irony to it?
> An example of a recursive analogy with a layer of irony to it could be the statement "the world is a stage, and we are all just actors playing our parts." This analogy uses the concept of the world being a stage to describe the world itself, and then applies the same concept to individuals within the world, suggesting that their actions and experiences are just part of a larger performance. The ironic layer comes from the fact that, while the analogy may be true in some ways, it also implies that individuals do not have agency or control over their own actions and experiences, which is not necessarily the case.
Try asking whether A is the same as B (even though the two aren't closely related), and what you get is an attempt to approximate the sameness somehow, which is confusing. I'd expect something like: A is not the same as B; instead, A is the same as C.
I think that's a really interesting point. Essentially, ChatGPT shows the loss of meaning. It can connect words to words, but it doesn't know what anything means. It has no categories to put ideas in. In fact, it doesn't even have ideas. All it has is words.
So A is the same as B just as much as A is the same as C, because A, B, and C are just words with no meaning to ChatGPT.
ChatGPT seems like the endpoint of certain lines of poststructuralist philosophy. There is no meaning to the text, only words. Words relate to other words, and that is all.
That's basically the conclusion Wolfram reached in this excessively long article [0] he wrote (which is nonetheless worth a read):
> The specific engineering of ChatGPT has made it quite compelling. But ultimately (at least until it can use outside tools) ChatGPT is “merely” pulling out some “coherent thread of text” from the “statistics of conventional wisdom” that it’s accumulated. But it’s amazing how human-like the results are. And as I’ve discussed, this suggests something that’s at least scientifically very important: that human language (and the patterns of thinking behind it) are somehow simpler and more “law like” in their structure than we thought. ChatGPT has implicitly discovered it. But we can potentially explicitly expose it, with semantic grammar, computational language, etc.
The real issue is deep understanding of what is being read.
One thing humans are still better at is the ability to create metaphors.
Try to get the bot to create a metaphor itself to explain something. No way.