Hard not to laugh, but you'd be surprised how often even other TTS companies mess this kind of thing up. I think it has a lot to do with the data source they use for training, which evidently doesn't include a lot of currency amounts...
If you're curious what's possible with <.01% of the funding, check out https://rime.ai/. We train on data recorded in our studio and specifically include a lot of currency in our scripts for this very reason.
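For anyone curious what that failure mode looks like in practice: one common fix is a text-normalization pass before synthesis, so the acoustic model never sees a raw "$40B". A minimal hypothetical sketch (the function, regex, and suffix mapping are illustrative, not Rime's actual pipeline):

```python
import re

# Hypothetical pre-TTS normalization: expand currency shorthand like "$40B"
# into words so the synthesizer doesn't read it as "40 bi-dollars".
SUFFIXES = {"K": "thousand", "M": "million", "B": "billion", "T": "trillion"}

def normalize_currency(text: str) -> str:
    def expand(match: re.Match) -> str:
        amount, suffix = match.group(1), match.group(2).upper()
        # Digit-to-word conversion ("40" -> "forty") would be a separate step.
        return f"{amount} {SUFFIXES[suffix]} dollars"
    return re.sub(r"\$(\d+(?:\.\d+)?)\s?([KMBTkmbt])\b", expand, text)

print(normalize_currency("$40B at a $300B post-money valuation"))
# -> "40 billion dollars at a 300 billion dollars post-money valuation"
```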
Why should they? Our "leaders" around the world are actively seeking to end human civilization. What point is there to work anymore? Even if humanity survives, we're gonna be knocked back to the dark ages at best. What are we all working for exactly? A "future" that's being rapidly dismantled?
> deliver increasingly powerful tools for the 500 million people who use ChatGPT every week.
Wasn't aware they'd hit a WAU count this high. Impressive, but then again at this kind of valuation you sure want to be heading towards 9-figure MAU numbers.
Do investors still not care about revenue and profits at a $300 billion valuation? Seems like the bigger problem for them is that they are losing money on the vast majority of those WAUs, with no obvious route to profitability because most of them will simply stop using it if forced to pay for it.
It's a gigantic bet on user stickiness in AI, and on the monetizable value of AI users who don't pay for subscriptions. Aka low-end consumers vs high-end consumers.
Nvidia and AMD were low-end vs high-end. In the end Nvidia won a total victory by ditching low-margin distractions like building GPUs for consoles and focusing solely on higher-end PC GPUs that could dually act as accessible research chips.
No it's not, it's a bet on AI replacing workers. Almost all the value isn't going to come from users paying $20 or even $200 per month, but from companies paying millions to billions of dollars for the API.
> It's even bigger than Salesforce, SAP, and Cisco
That's pretty incredible to think about. I recently visited SF for the first time and saw the Salesforce Tower. That OpenAI now has a higher valuation than that is crazy.
Salesforce pays to have their logo on top of the tower, much like stadium naming rights. They don't own the building. Very far from it. They lease like 12 floors total (and not even the higher ones).
They have a giant furnace that only burns hundreds. As I understand it, they lose money on every query they serve, even if you don’t consider the cost of training.
I don't think we'll ever see another startup raise so much funding every time they sneeze or wag a finger. It's getting ludicrous, really. $40B at a $300B valuation?
I have no problem with bitcoin's valuation. It isn't a company intending to create revenue. It is a relatively arbitrary store of value, not an expectation of utility.
OpenAI would need about $15 billion in profit per year for a P/E of 20 with zero risk ($300B valuation / 20), or a 10% chance of $150 billion in profit per year.
Alphabet is at about $100B in earnings per year. Do you think OpenAI has a 10% chance of being bigger than Google? It doesn't have a moat, but I guess Google doesn't either; it just dominates its market.
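To spell out the arithmetic behind that (figures taken from the comments above, nothing else assumed):

```python
# Back-of-the-envelope check of the P/E argument above.
valuation = 300e9              # OpenAI post-money valuation, $300B
target_pe = 20                 # price-to-earnings ratio assumed above
implied_profit = valuation / target_pe
print(f"Implied annual profit at P/E {target_pe}: ${implied_profit / 1e9:.0f}B")  # $15B

# Risk-adjusted framing: a 10% chance of a business 10x as profitable.
print(f"Or a 10% chance of ${implied_profit / 0.10 / 1e9:.0f}B per year")         # $150B

alphabet_earnings = 100e9      # rough Alphabet annual earnings cited above
print(f"Alphabet earnings for scale: ${alphabet_earnings / 1e9:.0f}B per year")
```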
The price point in the long term is the cost of electricity to run your model locally and the amortized cost of buying some hardware that can run it. The fact that it isn't streamlined today doesn't mean it won't be in X years.
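As a rough sketch of that amortization argument (the hardware price, lifetime, power draw, and electricity rate below are made-up placeholders, not real quotes):

```python
# Rough local-inference cost model: amortized hardware plus electricity.
hardware_cost = 2000.0            # assumed one-off hardware price, USD
lifetime_hours = 3 * 365 * 24     # assume 3 years of continuous use
power_kw = 0.4                    # assumed average draw while serving, kW
electricity_per_kwh = 0.15        # assumed electricity price, USD/kWh

amortized_per_hour = hardware_cost / lifetime_hours
energy_per_hour = power_kw * electricity_per_kwh
print(f"~${amortized_per_hour + energy_per_hour:.2f}/hour to run locally")
```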
Investors are blindly banking on everyone perpetually going to the theater to see the talkies and missing the vision that we'll all have TVs shortly thereafter...
SoftBank sounds familiar… weren’t they a major investor in WeWork? With such poor investing acumen, why haven’t they gone bankrupt yet? Perhaps the $40 billion raised by OpenAI can only be spent on services from SoftBank’s other investments? Do investors ever restrict how their investments are spent?
Just wondering how many others in this thread perceive this quest for "AGI" as delusional at the current time, when we don't yet understand the basis of natural general intelligence in almost any way at all? It's good to shoot for the stars, but it feels like NASA asking for funding for a manned mission to Andromeda before even landing a man on Mars. The belief that LLMs are the ticket feels absolutely quixotic to me.
The idea that LLMs have any road to AGI is much like looking at Charles Babbage's analytical engine design and decreeing that the road to creating a mind is, to borrow a quote from Henry Babbage, merely "a question of cards and time".
Various parts of their corporate structure and previous business/financial relationships are tied to the notion of “AGI” being achieved — which is poorly defined and likely to become a semantic/legal debate more than a scientific one.
So them pushing that language in their PR/marketing activity is not a surprise, and not really even meant to be scientifically meaningful.
I'm not sure people are saying LLMs are the ticket. Human intelligence has many aspects apart from language. Large language models seem to do quite well with language but are not really the thing for spatial awareness, doing maths, playing go, operating robot bodies and various other tasks. Computers can do ok with that stuff too, but not generally with language models.
If you define AGI as human-level intelligence in all aspects there's a way to go yet, but to me things seem to be getting quite close. I'd say the Turing test is basically passed; stuff like Woz's coffee test - that a robot can go into a house, find the coffee stuff and make coffee - is not there yet, but maybe in a couple of years? With that stuff I'd say Deepmind is much closer than OpenAI.
AGI doesn't have a strict definition though, so I think it would depend a lot on what you see "AGI" as being.
We're well on our way to building AIs which are competent at many tasks. Assuming an AGI doesn't need to be able to do every task a human can do, and doesn't need to do all of those tasks as well as an expert human, then something which could be called AGI doesn't seem that far off at all.
I remember a time quite recently when the idea of an AI beating a good-faith interpretation of the Turing test seemed very far away. I feel like we're much closer to AGI today than we were to beating the Turing test in the late 00s.
Yep, if it happens in 200 years and/or is LLM-like, consider me a dullard, future selves. I think humans feeding data to the computer (web crawling, RLHF, etc.) as a substitute for sense organs as input is nowhere near enough data for AGI. I'm also convinced these sums of money put into neuroscience would bring about AGI quicker than any alternative.
It's all about data ingestion, and the data assimilable by computers is tiny.
I am wondering why all these people think AGI will care about humans enough to send terminators after them.
It would be fun to watch billionaires pour all their wealth into something that makes up its own mind to go away and not give a damn about anything related to living things.
Not calling out any books so as not to spoil things for people - just mentioning it is not my original idea but one that I find interesting.
But general intelligence has so much more to it than this. It's so overly simplistic to say "outperform on tasks."
General intelligence means perceiving opportunities. It means devising solutions for problems nobody else noticed. It means understanding what's possible and what's valuable just from existing without being told. It means asking questions without prompting, simply for the sake of wondering and learning. It means so many things beyond "if I feed this data input to this function and hit run, can it come up with the correct output matching my expectations"?
Sure, an LLM might pass a series of problem-solving questions, but could it look up and see the motion of stars and realize they implied something about the nature of the world and start to study them, unasked, and deduce the existence of solar systems and galaxies and gravity and all the other things?
I just don't buy it. It's so reductive. They're hoping to skip over all the real understanding and achieve something great without doing the real legwork to understand the true mechanisms of intelligence by just pouring enough processing time into training. It won't work. They're missing integral mechanisms by overfocusing on the one thing they have a handle on. They don't know what they don't know, but worse, they're not trying to find out.
> It means asking questions without prompting, simply for the sake of wondering and learning.
I disagree. What you are describing is one of the possible goals of intelligence, it doesn't define intelligence itself. Many humans are not really interested in wondering and learning, but we call them intelligent.
You can totally tune an LLM so that it asks tons of questions of whoever opened the chat: how are you today? What are you doing right now? What are your hobbies? Etc.
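You don't even need fine-tuning for that; a system prompt gets most of the way there. A minimal sketch with the openai Python client (the model name and prompt are just illustrative):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Steer the model to open with unprompted questions instead of waiting.
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Open every conversation by asking the user several "
                    "curious questions about their day, work, and hobbies."},
        {"role": "user", "content": "Hi."},
    ],
)
print(resp.choices[0].message.content)
```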
The market itself is also arguably a massive form of AGI that well-predates the concept. I choose this interpretation when watching Terminator (any of them really).
TBF this doesn't imply anything about OpenAI's quest to make a chatbot that gets along with people at parties.
Play it and the first thing you hear is an enthusiastic "Today we’re announcing new funding - 40 bi-dollars at a 300 bi-dollar post-money valuation!" Hah.