> It's referring to the fact that cats are able to do tasks like multistage planning, which he asserts current AIs are unable to do.
I don't understand this criticism at all. If I go over to ChatGPT and say "From the perspective of a cat, create a multistage plan to push a houseplant off a shelf" it will satisfy my request perfectly.
Hmmm... You didn't really explain yourself - I'm not sure I understand your point.
But guessing at what you mean - when I evaluate ChatGPT, I include all the trivial add-ons. For example, AutoGPT will create a plan like this and then execute the plan one step at a time.
I think it would be silly to evaluate ChatGPT solely as a single execution endpoint.
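The plan-then-execute pattern described above can be sketched roughly as follows. This is a hypothetical, minimal illustration of the control flow, not AutoGPT's actual code: `ask_llm` is a stand-in for a real chat-completion API call, stubbed here with canned replies so the loop structure is visible.

```python
def ask_llm(prompt):
    # Hypothetical stand-in for a real LLM API call; returns canned text.
    if prompt.startswith("Plan:"):
        return "1. Locate the shelf\n2. Climb up\n3. Push the plant off"
    return f"Done: {prompt}"

def plan_and_execute(goal):
    # First ask the model for a numbered plan, then feed each step
    # back to the model for execution, one step at a time.
    plan = ask_llm(f"Plan: {goal}")
    steps = [line.split(". ", 1)[1] for line in plan.splitlines()]
    results = [ask_llm(f"Execute this step: {step}") for step in steps]
    return steps, results

steps, results = plan_and_execute("push a houseplant off a shelf")
print(steps)
```

The point of the sketch is that the "planning" lives in this outer loop, which is a trivial wrapper around repeated calls to the same model.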
As I understand it, a model simply predicts the next word, one word at a time. (The next token, actually, but for discussion's sake we can pretend a token is identical to a word.)
The model does not "plan" anything; it has no idea how a sentence will end when it starts one, as it only considers what word comes next, then what word after that, then what word after that. It discovers the sentence is over when the next token turns out to be a period. It discovers it's finished its assignment when the next token turns out to be a stop token.
So one could say the model provides the illusion of planning, but is never really planning anything other than what the next word to write is.
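The mechanism described above can be sketched in a few lines. This is a toy illustration under obvious simplifications: `toy_model` is a hard-coded lookup table standing in for a real neural network (which would return a probability distribution over tokens), but the loop is the point — each step sees only the tokens generated so far, picks one next token, and stops when a stop token appears.

```python
STOP = "<stop>"

def toy_model(context):
    # Hypothetical next-token predictor: maps the context so far to a
    # single next token. A real LLM would score every possible token;
    # here the "most likely" choice is hard-coded.
    table = {
        (): "the",
        ("the",): "cat",
        ("the", "cat"): "jumps",
        ("the", "cat", "jumps"): ".",
        ("the", "cat", "jumps", "."): STOP,
    }
    return table[tuple(context)]

def generate():
    # Autoregressive loop: one token per iteration, no lookahead.
    tokens = []
    while True:
        next_token = toy_model(tokens)
        if next_token == STOP:
            break
        tokens.append(next_token)
    return tokens

print(generate())  # ['the', 'cat', 'jumps', '.']
```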
To use your metaphor, the difference is that we are discussing whether someone can invent a calculator that does new and exciting things using current calculator technology.
You can't say "who cares if it's an illusion, it works for me" when the topic is whether an attempt to build a better one will achieve the stated goal.
I think I've explained several times that it can't plan, as it only does one word at a time. Your so-called multistage plan document is not planned out in advance; it is generated one word at a time with no plan.
If you don't care about the technical aspects, why ask in the first place what Yann LeCun meant?
"Now I know you said it can't plan, but what if we all agree to call what it does planning? That would be very exciting for me. I can produce a parable about a calculator if that would help. LeCun says it has limitations, but what if all agree to call it omniscient and omnipotent? That would also be very exciting for me."
Look man, 5 + 5 = 10, even if it's implemented by monkeys on typewriters.
This argument we're having is a version of the Chinese Room. I've never found Searle's argument persuasive, and I truly have no interest in arguing it with you.
This is the last time I will respond to you. I hope you have a nice day.
I don't think we're having an argument about the Chinese room, because as far as I know LeCun does not argue AI can't have "a mind, understanding or consciousness". Nor have I; I simply talked about how LLMs work, as I understand them.
There's a lot of confusion about these technologies, because tech enthusiasts like to exaggerate the state of the art's capabilities. You seem to be arguing "we must turn to philosophy to show ChatGPT is smarter than it would seem," which is not terribly convincing.