garymarcus's comments | Hacker News

Why it won’t reach AGI, ever, and how it might destroy the economy.

Deep analysis by a leading skeptic.


If you think AI is “smart” or “PhD level” or that it “has an IQ of 120”, take five minutes to read my latest newsletter (link below), in which I challenge ChatGPT with the incredibly demanding task of drawing a map of major port cities with above-average income.

The results aren’t pretty. 0/5, no two maps alike.

“Smart” means understanding abstract concepts and combining them well, not just retrieving and analogizing in shoddy ways.

No way could a system this wonky actually get a PhD in geography. Or economics. Or much of anything else.


> If you think AI is “smart” or “PhD level” or that it “has an IQ of 120”...

It's not there yet, it's still learning™, but a lot of progress in AI has happened recently; I'll give them that.

However, as you already point out in your newsletter, there are also lots of misleading and dubious claims, and far too much hype in the hope of raising VC capital, which goes hand in hand with the overpromising in AI.

One of them is the true meaning of "AGI" (right now it is starting to look like a scam), since there are several conflicting definitions, coming directly from those who stand to benefit.

What do you think it truly means given your observations?


“It’s still learning” is a misnomer. The model isn’t learning; we are. LLMs are static after training; all improvement comes from human iteration in the outer loop: fine-tuning, prompt engineering, tool integration, retrieval. Until the outer loop itself becomes autonomous and self-improving, we’re nowhere near AGI. Current hype confuses capability with agency.
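A minimal sketch of that outer loop, in Python. Note that call_model is a hypothetical stand-in for any chat-completion API, not a real SDK; the point is only where the "learning" lives.

    def call_model(prompt: str) -> str:
        # Frozen weights: nothing inside this function changes between
        # calls. Same prompt in, same distribution of answers out.
        # (Hypothetical stand-in for a real chat-completion API.)
        return "draft answer to: " + prompt

    def outer_loop(task: str, checks) -> str:
        prompt = task
        answer = call_model(prompt)
        for passes, hint in checks:
            if not passes(answer):
                # The "learning" happens out here: a human (or a script a
                # human wrote) edits the prompt and retries. The model
                # itself is untouched.
                prompt = task + "\n" + hint
                answer = call_model(prompt)
        return answer

    print(outer_loop(
        "List five major port cities.",
        checks=[(lambda a: "port" in a,
                 "Only include coastal cities with working ports.")],
    ))

Until that editing step is itself automated and self-improving, the loop stays human.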


This is a really surface-level investigation: it exclusively uses the part of the current multimodal model that is genuinely bad at the task presented, namely producing precise images, graphs, charts, etc. Try asking for tables of data or matplotlib code to generate the same visualizations and it will typically do far better. That said, if you actually use even the latest models day to day, you'll inevitably run into even stupider mistakes and hallucinations than this. But the point you're trying to make is undermined by appearing to have picked up ChatGPT with the exclusive goal of making a Substack post dunking on it.
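To make that suggestion concrete, here is roughly what the matplotlib route looks like. The city coordinates below are approximate, and the above-average-income filter is left out because it would need a real income dataset:

    # Instead of asking the model to draw a map, ask it for matplotlib
    # code and run that. Coordinates are approximate (lon, lat).
    import matplotlib.pyplot as plt

    ports = {
        "Singapore": (103.8, 1.35),
        "Rotterdam": (4.48, 51.92),
        "Shanghai": (121.47, 31.23),
        "Hamburg": (9.99, 53.55),
        "Los Angeles": (-118.26, 33.73),
    }

    fig, ax = plt.subplots(figsize=(10, 5))
    for name, (lon, lat) in ports.items():
        ax.scatter(lon, lat)
        ax.annotate(name, (lon, lat), textcoords="offset points", xytext=(5, 5))
    ax.set_xlabel("longitude")
    ax.set_ylabel("latitude")
    ax.set_title("Major port cities (approximate positions)")
    plt.show()

The output is deterministic and checkable, which is exactly what the image-generation path isn't.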


I very much appreciate all the ways we're improving our ideas of what "smart" means.

I wouldn't call LLMs "smart" either, but with a different definition than the one you use here: to me, at the moment, "smart" means being able to learn efficiently, with few examples needed to master a new challenge.

This may not be sufficient, but it does avoid any circular arguments about whether a given model has any "understanding" at all.
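That definition is at least measurable. A toy sketch of the idea (the task and the 1-nearest-neighbor learner are illustrative assumptions, not anyone's actual benchmark): plot accuracy against the number of training examples and call the learner "smart" if the curve rises fast.

    # Sample efficiency made concrete: accuracy as a function of the
    # number of training examples. Toy task: classify a point by which
    # side of x = 0.5 it falls on. Learner: 1-nearest-neighbor.
    import random

    def make_example():
        x = random.random()
        return x, int(x > 0.5)

    def one_nn(train, x):
        # Predict the label of the nearest training point.
        return min(train, key=lambda ex: abs(ex[0] - x))[1]

    random.seed(0)
    test = [make_example() for _ in range(500)]
    for k in (1, 2, 4, 8, 16):
        train = [make_example() for _ in range(k)]
        acc = sum(one_nn(train, x) == y for x, y in test) / len(test)
        print(f"{k:2d} examples -> accuracy {acc:.2f}")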


I don't believe ChatGPT has an IQ of 120, and after reading the linked article, I don't think the author does either.


Not arguing with any of it, but the (link below) doesn't exist.


The only thing an LLM does really well is statistical prediction.

As should be expected, sometimes it predicts correctly and sometimes it doesn't.
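A bigram model is that claim in miniature: it predicts the next word from counts alone, and its failures have the same flavor. (A real LLM conditions on vastly more context, but the prediction-from-learned-statistics core is the same.)

    # Statistical next-word prediction at toy scale: a bigram model.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict(word):
        # Most frequent follower wins; unseen words get nothing at all.
        followers = counts.get(word)
        return followers.most_common(1)[0][0] if followers else None

    print(predict("the"))   # "cat": seen twice after "the", so usually right
    print(predict("cat"))   # "sat" vs "ate" is a tie the counts can't settle
    print(predict("dog"))   # None: no statistics, no prediction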

It's kinda like FSD mode in a Tesla. If you're not willing to bet your life on it (and why would you?), it's really not all that useful.


For those wanting some background, rather than just wanting to vent:

1. Here is an evaluation of my recent predictions: https://garymarcus.substack.com/p/25-ai-predictions-for-2025...

2. Here is an annotated evaluation, slightly dated, that goes almost line by line through the original Deep Learning Is Hitting a Wall paper: https://garymarcus.substack.com/p/two-years-later-deep-learn...

Ask yourself how much has really changed in the intervening year.


It's funny: I see myself as basically just a pretty unabashed AI believer, but when I look at your predictions, I don't really have any core disagreements.

I know you as, like, the #1 AI skeptic (no offense), but when I see points like "16. Less than 10% of the work force will be replaced by AI. Probably less than 5%.", that seems OPTIMISTIC about AI capabilities to me. 5% of all jobs being automated would be HUGE, and it's something we're still up in the air about.

Same with "AI “Agents” will be endlessly hyped throughout 2025 but far from reliable, except possibly in very narrow use cases." Even the very existence of agents that are reliable in very narrow use cases is crazy impressive! When I was in college for computer science 5 years ago, this would have sounded like something that would take a decade of work by one giant tech conglomerate for ONE agentic task. Now it's like a year off for one less-giant tech conglomerate, for many possible agentic tasks.

So I guess it's just a matter of perspective of how impressive you see or don't see these advances.

I will say, I do disagree with the sentiment of your comment right here, where you say "Ask yourself how much has really changed in the intervening year."

I think the o1 paradigm has been crazy impressive. There was much debate over whether scaling up models would be enough, but now we have an entirely new system that has unlocked remarkable reasoning capabilities.


exactly. and the counterarguments boil down to “na na” and hope.


god these arguments are empty, personal and without any substance whatsoever


Yes, sorry, the internet unfortunately works this way and even though we are trying everything we know to dampen this stuff on Hacker News in favor of more thoughtful conversation, it seems we can only tweak the margins somewhat.


Show me the goalposts I have moved, with actual quotes to prove it. Nobody ever has when I have asked.

Also consider, e.g., the bets I have made with Miles Brundage (and offered to Musk), where I have put money behind my views.

A good summary of the predictions I made (mostly correct) is here: https://open.substack.com/pub/garymarcus/p/25-ai-predictions...


you obviously never actually read the paper; you should.


Gary I respect you - will do!


because i correctly foresaw quite a lot. you should actually read the paper.


100%. There has not been any solid theoretical argument whatsoever (beyond some confusions about scaling that we can now see were incorrect).


the common thread is rationality and an aversion to disinformation.


It seems like you have mainly switched to anti-neural-network punditry. Have you thought about trying to go back to research, or is that not viable for you anymore?

