Sorry, I didn't mean that LLMs are not a subset of AI. They clearly are. What they are not is equal to AI; there are things that are AI that are not LLMs.
It is obvious when I say it, but my internal language model (heh) can tell a lot of people are not thinking that way when they speak, and how people speak is often a more reliable signal than how they claim they are thinking.
I think the problem here lies in classifying what the "I" (intelligence) is in the first place. To answer the question of what equals AI, we must first answer the question of what equals human intelligence, in a self-consistent, logical, parsable manner.