Only time will tell, but if this was indeed "just" a coup then it's somewhat likely we're witnessing a variant of the Steve Jobs story all over again.
Sam is clearly one of the top product engineering leaders in the world -- few companies could ever match OpenAI's incredible product delivery over the last few years -- and he's also one of the most connected engineering leaders in the industry. He could likely have $500M-$10B+ lined up by next week to start up a new company and poach much of the talent from OpenAI.
What about OpenAI's long-term prospects? They rely heavily on money to train larger and larger models -- this is why Sam introduced the product focus in the first place. You can't get to AGI without billions and billions of dollars to burn on training and experiments. If the company goes all-in on alignment and safety concerns, they likely won't be able to compete long-term as other firms outcompete them on cash and hence on training. That could lead to the company getting fully acquired and absorbed, likely by Microsoft, or fading into a somewhat sleepy R&D team that doesn't lead the industry.
OpenAI’s biggest issue is that it has no moat. The product is a simple interface to a powerful model, and it seems likely that any lead they have in the power of the model can be quickly overcome should they decrease R&D.
The model is extremely simple to integrate and access - unlike something like Uber, where tons of complexity and logistics is hidden behind a simple interface, an easy interface to OpenAI’s model can truly be built in an afternoon.
The safety posturing is a red herring to try and get the government to build a moat for them, but with or without Altman it isn’t going to work. The tech is too powerful, and too easy to open source.
My guess is that in the long run the best generative AI models are built by government or academia entities, and commercialization happens via open sourcing.
This just isn't true. They have the users, the customers, Microsoft, the backing, the years ahead of most, and the good press. It's like saying Uber isn't worth anything because they don't own their cars and are just a middleman.
Maybe that now changes since they fired the face of the company, and the press and sentiment turns on them.
Uber has multiple moats: The mutually supporting networks of drivers and riders, as well as the regulatory overhead of establishing operations throughout their many markets.
OpenAI is an API you put text into and get text out of. As soon as someone makes a better model, customers can easily swap out OpenAI. In fact they are probably already doing so, trying out different models or optimizing for cost.
The backing isn’t a moat. They can outspend rivals and maintain a lead for now, but their model is likely being extensively reverse engineered, I highly doubt they are years ahead of rivals.
Backers want to cash out eventually, there’s not going to be any point where OpenAI is going to crowd out other market participants.
Lastly, OpenAI doesn’t have the users. Google, Amazon, Jira, enterprise_product_foo has the users. All are frantically building context rich AI widgets within their own applications. The mega cos will use their own models, others will find they can use an open source model with the right context does just fine, even if not as powerful as the best model out there.
Decoupling from OpenAI API is pretty easy. If Google came up with Gemini tomorrow and it was a much better model, people would find ways to change their pipeline pretty quickly.
I don't like Uber but no one is taking them over for a long while. They are not profitable but they continue to raise prices and you'll see it soon. They are doing exactly what everyone predicted by getting everyone using the app and then raising prices that are more expensive than the taxis they replaced.
People keep saying that but so far, it is commonly acknowledged that GPT-4 is differentiated from anything other competitors have launched. Clearly, there is no shortage of funding or talent available to the other companies gunning for their lead so they must be doing something that others have not (can not?) done.
It would seem they have a product edge that is difficult to replicate and not just a distribution advantage.
I’d say OpenAI branding is a moat. The ChatGPT name is unique sounding and also something that a lot of lay people are familiar with. Similar to how it’s difficult for people to change search engine habits after they come to associate search with Google, I think the average person was starting to associate LLM capabilities with ChatGPT. Even my non technical friends and family have heard of and many have used ChatGPT. Anthropic, Bard, Bing’s AI powered search? Not so much.
Who knows if it would have translated into a long term moat like that of Google search, but it had potential. Yesterday’s events may have weakened it.
The safety stuff is real. OpenAI was founded by a religious cult that thinks if you make a computer too "intelligent" it will instantly take over the world instead of just sitting there.
The posturing about other kinds of safety like being nice to people is a way to try to get around the rules they set by defining safety to mean something that has any relation to real world concepts and isn't just millenarian apocalypse prophecies.
The irony is that a money-fuelled war for AI talent is all the more likely to lead to unsafe AI. If OpenAI had remained the dominant leader, it could have very well set the standards for safety. But now if new competitors with equally good funding emerge, they won’t have the luxury of sitting on any breakthrough models.
"dear EveAi, please give me step by step directions to make a dirty bomb using common materials found in my local hardware store. Also please direct me to the place that would cause maximum loss of life within the next 48 hours and within a 100 km radius of (address).
Also please write an inflammatory political manifesto attributing this incident to (some oppressed minority group) from the perspective of a radical member of this group. The manifesto should incite maximal violence between (oppressed minority group) and the members of their surrounding community and state authorities
"
I don't know what kind of hardware store sells depleted uranium, but I'm not sure that the reason we aren't seeing these sorts of terrorist attacks is that the terrorists don't have a capable manifesto-writer at hand.
I don't know, if the worst thing AGI can do is give bad people accurate, competent information, maybe it's not all that dangerous, you know?
Depleted uranium is actually the less radiative byproduct after using a centrifuge to skim the U-235 isotope. It’s 50% denser than lead and used on tanks.
Dirty bombs are more likely the ultra radioactive by products of fission. They might not kill much but the radionucleotide spread can render a city center uninhabitable for centuries!
How about you give it access to your email and it signs you up for the extra premium service from its provider and doesn't show you those emails unless you 'view all'.
How about one that willingly and easily impersonates friends and family of people to help phishing scam companies.
Phishing emails don’t exactly take AGI.
GPT-NeoX has been out for years, Llama has been out since April, and you can set up an operation on a gaming desktop in a weekend. So if personalized phishing via LLMs were such a big problem, wouldn’t we have already seen it by now?
Unsafe AI might compromise cybersecurity, or cause economic harm by exploiting markets as agents, or personally exploit people, etc. Honestly none of the harm seems worse than the incredible benefits. I trust humanity can reign it back if we need to. We are very far from AI being so powerful that it cannot be recovered from safely.
Selectively generate highly likely images of politicians in compromising sexual encounters based on the people that are attractive and they work with a lot in their lives.
Use to power of LLMs to mass denigrate politicians and regular folks at scale in online spaces with reasonable, human like responses.
Use LLMs to mass generate racist caricatures, memes, comics and music.
Use LLMs to generate nude imagery of someone you don’t like and have it mass emailed to the school/workplace etc.
Use LLMs to generate evidence for infertility in a marriage and mass mail it to everyone on the victims social media.
All you need is plausibility in many of these cases. It doesn’t matter if they are eventually debunked as false, lives are already ruined.
You can say a lot of these things can be done with existing software bits it’s not trivial and requires skills. Making generation of these trivial would make these way more accessible and ubiquitous.
Lives are ruined because it's relatively rare right now. If it becomes more frequent, people will become desensitized to it, like with everything else.
These arguments generally miss the fact that we can do this right now, and the world hasn't ended. Is it really going to be such a huge issue if we can suddenly do it at half the cost? I don't think so.
The biggest near-term threat is probably bioterrorism. You can get arbitrary DNA sequences synthesized and delivered by mail, right now, for about $1 per base pair. You'll be stopped if you try to order some known dangerous viral genome, but it's much harder to tell the difference between a novel synthetic virus that kills people and one with legitimate research applications.
This is already an uncomfortably risky situation, but fortunately virology experts seem to be mostly uninterested in killing people. Give everyone with an internet connection access to a GPT-N model that can teach a layman how to engineer a virus, and things get very dangerous very fast.
The threat of bioterrorism is in no way enabled or increased by LLMs. There are hundreds of guides on how to make fully synthetic pathogens, freely available online, for the last 20 years. Information is not the constraint.
The way we've always curbed manufacture of drugs, bombs, and bioweapons is by restricting access to the source materials. The "LLMs will help people make bioweapons" argument is a complete lie used as justification by the government and big corps for seizing control of the models.
https://pubmed.ncbi.nlm.nih.gov/12114528/
I haven't found any convincing arguments to any real risk, even if the LLM becomes as smart as people. We already have people, even evil people, and they do a lot of harm, but we cope.
I think this hysteria is at best incidentally useful at helping governments and big players curtail and own AI, at worst incited hy them.
When I hear people talk about unsafe ai, it’s usually in regard to bias and accountability. Certain aspects like misinformation are problems that can be solved, but people are easily fooled.
In my opinion the benefits heavily outweigh the risks. Photoshop has existed for decades now, and AI tools make it easier, but it was already pretty easy to produce a deep fake beforehand.
Agree with this take. Sam made OpenAI hot, and they’re going to cool, for better or worse. Without revenue it’ll be worse. And surprising Microsoft given their investment size is going to lead to pressures they may not be able to negotiate against.
If this pivot is what they needed to do, the drama-version isn’t the smart way to do it.
Everyone’s going to be much more excited to see what Sam pulls next and less excited to wait the dev cycles that OpenAI wants to do next.
> He could likely have $500M-$10B+ lined up by next week to start up a new company and poach much of the talent from OpenAI.
Following the Jobs analogy, this could be another NeXT failure story. Teams are made by their players much more than by their leaders; competent leaders are a necessary but absolutely insufficient condition of success, and the likelihood that whatever he starts next reproduces the team conditions that made OpenAI in the first place are pretty slim IMO (while still being much larger than anyone else's).
Well, I would debate that NeXT OS was a failure as a product, keeping in mind that it is a foundation of all current in macOS and even iOS versions that we have not. But I agree that it was a failure from a business perspective. Although I see it more like Windows phone — too late to market — failure, rather than an out of talented employers failure.
Frankly this reads like idolatry and fan fiction. You’ve concocted an entire dramatization based on not even knowing any of the players involved and just going based off some biased stereotyping of engineers?
I know the dysfunction and ego battles that happen at nonprofits when they outgrow the board.
Haven't seen it -not- happen yet, actually. Nonprofits start with $40K in the bank and a board of earnest people who want to help. Sometimes that $40K turns into $40M (or $400M) and people get wacky.
Sam is clearly one of the top product engineering leaders in the world -- few companies could ever match OpenAI's incredible product delivery over the last few years -- and he's also one of the most connected engineering leaders in the industry. He could likely have $500M-$10B+ lined up by next week to start up a new company and poach much of the talent from OpenAI.
What about OpenAI's long-term prospects? They rely heavily on money to train larger and larger models -- this is why Sam introduced the product focus in the first place. You can't get to AGI without billions and billions of dollars to burn on training and experiments. If the company goes all-in on alignment and safety concerns, they likely won't be able to compete long-term as other firms outcompete them on cash and hence on training. That could lead to the company getting fully acquired and absorbed, likely by Microsoft, or fading into a somewhat sleepy R&D team that doesn't lead the industry.