I've done a lot of work on information extraction with these over the last year, and if accuracy counts, then a) GPT4 is in a league of its own, and b) GPT4 still isn't really very good. They may not have a "moat," but they're still the only player in town when quality is critical.
For now. When we run our own in-house analysis across our various use cases, the quality of competitors has been improving considerably.
It looks like GPT4 has approached an asymptote in quality (at least within a compute-time window where it remains even marginally cost-effective). Others are just catching up to that goalpost.
Even GPT4 suffers from the problems intrinsic to all LLMs: in real-world use, hallucinations become a problem, it has a very difficult time with temporal relevance (i.e., identifying when something is out of date), and it is horrifically bad at any kind of qualitative judgement.
> They may not have a "moat," but they're still the only player in town when quality is critical
Their initial moat was built with ChatGPT, which was launched less than a year ago and was surpassed by competitors in less than 6 months. Their current GPT4 is less than 6 months old. While your statement may be true for now, I don’t expect it will hold longer term. They have name recognition advantage, but so did AOL.
Input some document, get a JSON with all the fields. It requires understanding a lot about world entities, fields, and values to parse free-form text into structured form. It also works on screens: you can extract information from an app, for example to run an AI bot on top of the app.
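The plumbing side of this is simple. A rough sketch of the pattern, assuming the official OpenAI Python client; the model name, schema fields, and prompt wording here are placeholders, not anyone's actual setup:

```python
# Minimal sketch: free-form document in, structured JSON out.
# Assumes the `openai` Python package (v1+); schema fields are
# made-up placeholders for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA_HINT = """Return ONLY a JSON object with these fields:
{"title": str, "author": str, "date": str or null, "total_amount": str or null}
Use null for anything not present in the document."""

def extract_fields(document_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # extraction wants determinism, not creativity
        messages=[
            {"role": "system", "content": SCHEMA_HINT},
            {"role": "user", "content": document_text},
        ],
    )
    # The model can still emit malformed JSON; real pipelines need
    # validation and retries rather than a bare json.loads().
    return json.loads(response.choices[0].message.content)
```

The code is the easy part; the quality gap between models shows up entirely in how well they map messy free-form text onto those fields.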
My use cases were extracting stuff like this from scientific papers: "what were the interventions in this study, what outcomes were measured, and what was the treatment effect of each intervention on each outcome?"
They're trying to build a moat out of government regulation (aka rent-seeking). In May, their CEO went before Congress and asked for it. Soon after, the media started churning out AI fearmongering articles. I expect regulation bills to be proposed soon.
Yup - right now, they're throwing all their development efforts at trying to enhance "safety." The goal is to do it without significantly degrading model performance for the majority of use cases.
When the inevitable regulation starts rolling out, OpenAI expects their lobotomized models to outperform competing lobotomized models, because they'll have a huge head start (and probably will have had a hand in guiding the legislation as well).
There's a good chance the fear of China taking over the AI space worldwide will end up being stronger than OpenAI's push for regulation.
Politicians know the latter is real, and they also know that the "Terminator" fear is unfounded, at least for now. At least in the US, I doubt very much that Congress will cater to OpenAI. They know it would undermine the prospects of the entire AI industry in the US and its long-term competitiveness in the international arena.
They still have some of the best research talent in the world working there. And they have Microsoft providing them basically free, "almost" unlimited compute resources. Their "moat" isn't what they have now; it's their ability to make new stuff.
"ability to make new stuff", ha! Let's actually see the new stuff, then I'll believe it.
I've seen too many first movers fail to differentiate themselves and eventually fall behind their competition to really believe that resources and talent alone constitute a viable moat.
Some would argue they can even be a hindrance, if they render the company complacent and risk-averse.
OpenAI better have some earth shattering thing up its sleeve because I don't understand what their moat is.