Gary Marcus would have written this article in all possible scenarios, unless ChatGPT 5 was literally AGI (and even then, he would probably still have found something to attack). There is valid criticism, and then there is being a contrarian for the sake of being a contrarian.
The whole thing feels less like “Hey, this is why I think the model is bad” and more like the kind of sensationalist headline you’d read in a really trashy tabloid, something like: “ChatGPT 5 is Hot Garbage, Totally Fails, Sam Altman Crushed Beneath His Own Failure.”
Also, I have no idea why people give so much attention to what this guy has to say.
His claims were that GPT-5 would be an incremental improvement at best and that LLMs are not sufficient for AGI, all while Sam Altman has been claiming that AGI is just around the corner since GPT-4. People pay attention to what Gary Marcus says because he’s right.