There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources at Meta, and we didn't see anything.
It could be a management issue, though, and I sincerely hope we will see more competition, but from what I quoted above, it does not seem like it.
Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.), so I'm not quite sure how what he proposed would work.
"and we didn't see anything" is not justified at all.
Meta absolutely has (or at least had) a world-class industry AI lab and has published a ton of great work and open-source models (granted, their open-source LLM work failed to keep up with Chinese models in 2024/2025; their other open-source work on things like segmentation doesn't get enough credit, though). Yann's main role was Chief AI Scientist, not any sort of product role, and as far as I can tell he did a great job building up and leading a research group within Meta.
He deserves a lot of credit for pushing Meta to be very open about publishing research and open-sourcing models trained on large-scale data.
Just as one example, Meta (together with NYU) just published "Beyond Language Modeling: An Exploration of Multimodal Pretraining" (https://arxiv.org/pdf/2603.03276), which contains a ton of insights backed by large-scale experiments.
Yann did seem to end up with a bit of an inflated ego, but I still consider him a great research lead. Context: I did a PhD focused on AI, and Meta's group had a pedigree similar to Google AI/DeepMind's as far as places to go for an internship or to join after graduation.
For instance, under Yann's direction Meta FAIR produced the ESM protein sequence model, which is less hyped than AlphaFold but has been incredibly influential. They achieved great performance without using multiple sequence alignments as an input/inductive bias, which matters enormously for large classes of proteins where such alignments are pretty much noise.
> Creating a startup has to be about a product. When you raise 1B, investors are expecting returns, not papers.
Speaking of returns: Apple absolutely fucked Meta's ads with its privacy controls, which trashed ad performance, revenue, and the share price. Meta turned things around using AI, with Yann as the lead researcher. Are you willing to give him credit for that? Revenue is now greater than it was before Apple's data lockdown.
Apple has allowed Facebook, TikTok etc. to track users across devices AND device resets via the iCloud Keychain API.
When you log into FB with any account on any device, then install FB on a new device, or even after you erase the device, they know it's you even before you log in, because the info is tied to your Apple iCloud account.
And there's no way for users to see or delete what data other companies have stored and linked to your Apple ID via that API.
It's been like this for at least 5 years and nobody seems to care.
None that I found. You can test it right now yourself. Install FB, log in, delete FB, reinstall FB. Your previous login info will be there.
That would be fine if users could SEE what has been stored and DELETE it WITHOUT going through the app and trusting it to show you everything honestly.
What's even worse is that it silently persists across DEVICE reinstalls.
Erase and reset your iPhone/iPad. Sign into the same iCloud account. Reinstall FB. Your login info will still be there.
Buy a new iPhone/iPad. Sign into the same iCloud account. Reinstall FB. Your login info will still be there.
>> but he had access to many more resources in Meta, and we didn't see anything
> I wasn't criticising his scientific contribution at all, that's why I started my comment by appraising what he did.
You were criticising his output at Facebook, though. But he was in the research group at Facebook, not a product group, so it seems like we did actually see lots of things?
They're expecting what you promised them when they handed over the money. That is "more money" for most investors but that isn't the sole universal human objective. Money has to serve an instrumental purpose and if one of your purposes is something that can't currently be achieved, simply getting more money won't help. You need to give that money to some venture that might actually be able to achieve it. I have no doubt there are at least a few very rich people out there who just have sci-fi nerd dreams and want to see someone go to Mars, go to Jupiter, discover alien life, rebuild dinosaurs, or create a truly autonomous entirely new form of artificial life just to see if they can. If it makes money, great. If it doesn't, what else was I going to do? Die with $60 billion in the bank instead of $40 billion?
> There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.
That's true for 99% of scientists, but dismissing their opinion because they haven't done world-shattering or groundbreaking research is probably not the way to go.
> I sincerely wish we will see more competition
I really hope we don't; science isn't markets.
> Understanding world through videos
The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.
Do not keep bad results in context. You have to purge them to prevent them from affecting the next output. LLMs are deceptively capable, but they don't respond like a person. You can't count on implicit context, and you can't count on parts of the implicit context having more weight than others.
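A minimal sketch of what that purge looks like in practice, assuming the usual chat-style list of role/content messages (the `rejected` flag and the helper name are illustrative, not any particular API): rather than piling corrections on top of a bad output, drop the bad attempt and its correction turn, then re-ask from a clean history.

```python
def purge_failed_turns(messages):
    """Return a copy of the conversation with rejected assistant
    attempts (and the user corrections that followed them) removed,
    so the next request starts from a clean context."""
    cleaned = []
    skip_next_correction = False
    for msg in messages:
        if msg.get("rejected"):
            # A bad output we don't want echoed back into the model.
            skip_next_correction = True
            continue
        if skip_next_correction and msg["role"] == "user":
            # Also drop the "no, fix it" turn that referenced the bad output.
            skip_next_correction = False
            continue
        cleaned.append(msg)
    return cleaned

history = [
    {"role": "user", "content": "Summarize the report in three bullets."},
    {"role": "assistant", "content": "Here are five bullets...", "rejected": True},
    {"role": "user", "content": "No, I said three bullets."},
]

# Re-prompt from the purged history instead of accumulating corrections.
clean_history = purge_failed_turns(history)
```

The point is that the model sees only the original ask, not its own failed attempt, so it can't anchor on the bad output.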
Is this a troll? Even if we just ignore Llama, Meta has invented and released so much foundational research and open-source code. I would say that the computer vision field would be years behind if Meta hadn't published core research like DETR or MAE.
Most folks get paid a lot more in a corporate job than tinkering at home - using the 'follow the money' logic, it would make sense that they would produce their most inspired work as 9-5 full-stack engineers.
But passion and freedom to explore are often more important than resources.
For a hot minute Meta had a top-3 LLM and open-sourced the whole thing, even with LeCun's reservations around the technology.
At the same time Meta spat out huge breakthroughs in:
- 3D model generation
- Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
- A whole new class of world modeling techniques (JEPAs)
> - Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
If it was a breakthrough, why did Meta acquire Wang and his company? I'm genuinely curious.
Wang fits the profile of a possible successor CEO for Meta.
Young, hit it big early, got into the AI boom early straight out of college. Obviously not woke (just look at his public statements).
Unfortunately the dude knows very little about AI or ML research. He's just another wealthy grifter.
At this point decision making at Meta is based on Zuckerberg's vibes, and I suspect the emperor has no clothes.
I can’t reconcile this dichotomy: most of the landmark deep learning papers were developed with what, by today’s standards, were almost ridiculously small training budgets — from Transformers to dropout, and so on.
So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D
It's a matter of needing more time, which is a resource even SV VCs are scared to throw around. Look at the timeline of all these advancements and how long they took:
LeCun introduced backprop for deep learning back in 1989
Hinton published contrastive divergence in 2002
AlexNet was 2012
Word2vec was 2013
Seq2seq was 2014
AIAYN ("Attention Is All You Need") was 2017
UnicornAI was 2019
InstructGPT was 2022
This makes a lot of people think that things are just accelerating and they can be along for the ride. But it's the years and years of foundational research that allow this to be done. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. That sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.
If his ideas had real substance, we would have seen substantial results by now.
He introduced I-JEPA in 2023, so almost three years ago at this point.
If he still hasn’t produced anything truly meaningful after all these years at Meta, when is that supposed to happen? Yann LeCun has been at Facebook/Meta since December 2013.
Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.
> If his ideas had real substance, we would have seen substantial results by now
This is naive. It's like saying that if backprop had any real substance, it would have produced results within 10 years of its publication in 1989.
> Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.
Again, those resources are important. But one resource being ignored is time. Try baking a turkey at 300°F for 4 hours versus at 900°F for 1 hour and see how edible each one is.
Backprop kept producing wins. That bought it time.
“Wait longer” is not a blank check. In 2026, with Meta-scale talent, data, and compute, serious ideas should show strong intermediate results, not just theory.
Time is necessary, but it is not evidence. More compute does not replace insight, but it does speed up falsification.
So no, skepticism is not naive. If a research program still cannot point to a clear empirical advantage after years, “it just needs more time” stops sounding like science and starts sounding like insulation from the scoreboard.
Llama models pushed the envelope for a while, and having them "open-weight" allowed a lot of tinkering. I would say that most fine-tuned models evolved from work on top of Llama models.
FAIR was founded in 2013 and Llama's first release was in 2023. Musk co-founded OpenAI in 2015, but no reasonable person credits ChatGPT in 2022 to him.
In an interview, Yann mentioned that one reason he left Meta was that they were very focused on LLMs and he no longer believed LLMs were the path forward to reaching AGI.