In addition to very open publishing, Google recently released Flan-UL2 as open source, which is an order of magnitude more impressive than anything OpenAI has ever open sourced.
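(For anyone who wants to poke at it, here's a minimal sketch, assuming the google/flan-ul2 checkpoint published on Hugging Face and the transformers library; the point is just that the released weights are directly usable:)

    # Sketch: load the released Flan-UL2 weights via Hugging Face transformers.
    # Assumes the google/flan-ul2 model card; the checkpoint is tens of GB,
    # so expect a long download.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2")

    prompt = "Answer the following question. What is the capital of France?"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))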
I agree, it is a bizarre world where the "organization that launched as a not-for-profit called OpenAI" is considerably less open than Google.
> Google recently released Flan-UL2 as open source, which is an order of magnitude more impressive than anything OpenAI has ever open sourced.
CLIP has been extremely influential and is still an impressive model.
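(Part of why it was so influential is how little code zero-shot image classification takes. A sketch, assuming the openai/clip-vit-base-patch32 checkpoint via transformers; "cat.jpg" is a hypothetical local image:)

    # Sketch: zero-shot image classification with the open CLIP weights.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("cat.jpg")  # hypothetical image file
    labels = ["a photo of a cat", "a photo of a dog"]
    inputs = processor(text=labels, images=image,
                       return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)  # label probabilities
    print(dict(zip(labels, probs[0].tolist())))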
Personally, I have found Whisper to be very impressive.
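(Transcription really is just a few lines. A sketch, assuming the open-source openai-whisper package and a hypothetical audio file:)

    # Sketch: speech-to-text with the open-sourced Whisper weights.
    # Assumes `pip install openai-whisper`; "interview.mp3" is hypothetical.
    import whisper

    model = whisper.load_model("base")  # downloads the checkpoint on first use
    result = model.transcribe("interview.mp3")
    print(result["text"])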
I didn't even see any news around the release of Flan-UL2, and I pay significantly more attention to machine learning than the average person. Having now searched for more info, Flan-UL2 seems somewhat interesting, but I don't know that I find it "an order of magnitude more impressive" than CLIP or Whisper. Granted, they are completely different types of models, so it is hard to compare them.
If Flan-UL2 is as good as one Twitter account was hyping it up to be, then I'm surprised it hasn't been covered to the same extent as Meta's LLaMA. Flan-UL2 seems to have gotten a total of 3 upvotes on HN. But there is no shortage of hype in the world of ML models, so I take that Twitter account's report on Flan-UL2 with a (large) grain of salt. I'll definitely be looking around for more info on it.
Maybe they're embarrassed to admit that they just recycled click farms to improve training data quality, and that's it?
A bit like that fictional janitor who said "just add more computers to make it better" right before papers on unexpected emergent comprehension at scale started appearing.
People may criticize Google because they don't release the weights or an API, but at least they publish papers, which allows the field to progress.