You're either overestimating the cost of inference or underestimating the cost of running a service like Facebook at that scale. Meta's cost of revenue (i.e. just running the service, not R&D, not marketing, not admin, none of that) was about $30B/year in 2024. In the leaked OpenAI financials from last year, their 2024 inference costs were 1/10th of that.
You're moving the goalposts, given the original complaint was not about research costs but about the marginal cost of serving additional users...
I guess you'd be surprised to find out that Meta's R&D costs are an order of magnitude higher than OpenAI's training + research costs? ($45B in 2024, vs. about $5B for OpenAI according to the leaked financials.)
Meta has a massively profitable social media business with an impenetrable network effect, so they're using that to subsidize the research. Whether that's a good decision or not is above my paygrade, but it's sustainable until something changes with the social media market.
I don't know what "moving the goalposts" means. Why were the goalposts there in the first place? The interesting questions here are whether OpenAI can sustain their current cost model long-term, and whether the revenue stream is sustainable without the costs. We'll see, I guess! It's fascinating.
I mean, the GP made a point about "per-user costs" that I believe was false, so that was the specific thing I was commenting on. Steering the discussion to a totally different topic of research costs doesn't help us reach closure on that point. It's basically new objections being thrown at the wall, and none being scraped off.
I think what you're not realizing is that OpenAI already has the kind of consumer-facing business that makes Google and Meta hundreds of billions of revenue a year. They have the product, they have the consumer mindshare and usage. All they are missing is the monetization part. And they're doing that at a vastly lower cost basis than Google or Meta, no matter what class of spending you measure. Their unit costs are lower, their fixed costs are lower, their R&D costs are lower.
They don't need to stop R&D to be profitable. Literally all they'd need to do is minimal ads monetization.
There's all kinds of things you can criticize the AI companies for, but the economics being unsustainable really isn't one of them. OpenAI is running a massive consumer-facing app for incredibly cheap in comparison to its peers running systems of a similar scale. It'd be way more effective to concentrate on the areas where the criticism is either obviously correct, or there's at least more uncertainty.
You keep saying we’re changing the goalposts, then you make a point that is exactly what I’m trying to address. Can OpenAI monetize without customers going elsewhere, since they have limited network effect? Can OpenAI stop spending on research to get their costs down? “Can OpenAI do this simple thing” is the whole question!
> Can OpenAI stop spending on research to get their costs down?
They do not need to. Their costs are already really low given the size and nature of their user base.
> “Can OpenAI do this simple thing” is the whole question!
There was a claim by someone else about OpenAI's unit costs being unsustainably high: I gave the data that shows they aren't. They are in fact quite low compared to those of bigtechs running comparable consumer services.
Then you said that the real problem was OpenAI's R&D costs being so high. I gave the data showing that is not the case. Their R&D costs are very low compared to those of bigtechs running comparable consumer services.
So I take it that you now agree that their unit and R&D costs are indeed low compared to the size of their user base? And the main claim is that they can't actually monetize without losing their users?
It seems hard to be totally confident about that claim either way; we'll only know once they start monetizing. But the level of monetization they'd need to be profitable is comparatively light. That follows directly from their cost structure (which is why the cost structure is interesting). They don't need to extract Facebook levels of money from each user to be profitable. They can keep ad volumes low and the ad formats inconspicuous to start with, and then boil the frog over a decade.
Like, somebody in the comments for this post said that ChatGPT has recently started showing affiliate links (clearly separated from the answer) for queries about buying products. I hadn't heard about it before now, but that's an obvious place to start: high commissions, high click-through rates, and it's the use case where the largest proportion of users will appreciate the ads rather than be annoyed by them.
So it seems that we'll find out sooner rather than later. But I'd be willing to bet money that there won't be any exodus of users from OpenAI due to ads.
Instead you'll see a slow ratchet effect: as OpenAI increases its level of ad-based monetization for ChatGPT, the less popular chatbots will follow a step or two behind. Basically, they'll let OpenAI establish the norms for ad frequency and formats and take the minimal heat for it, rather than try to become some kind of anti-ad champions promising free service with no ads in perpetuity.
The reason I expect this is that we haven't seen the alternative happen in similar businesses. Nobody tried, for example, to make a search engine with no monetization at all. Some tried making search engines that promised no personalized ad targeting, but nobody completely disowned the underlying business model.
You're right, I was underestimating the cost of running Facebook! $30B spent / ~3B users = ~$10 per user per year. I'd thought it would be closer to 10¢.
Do you know why it's so expensive? I'd thought serving html would be cheaper, particularly at Facebook's scale. Does the $30B include the cost of human content moderators? I also guess Facebook does a lot of video now; do you think that's it?
Also, even still, $10 per user has got to be an order of magnitude less than what OpenAI is spending on its free users, no?
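As a sanity check, the $10/user figure falls straight out of the two numbers quoted upthread ($30B cost of revenue, ~3B users):

```python
# Back-of-envelope: Meta's reported 2024 cost of revenue spread
# evenly across its user base (both figures quoted upthread).
cost_of_revenue = 30e9  # $/year
users = 3e9             # approximate user count

cost_per_user = cost_of_revenue / users
print(f"${cost_per_user:.0f} per user per year")  # $10
```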
> Do you know why it's so expensive? I'd thought serving html would be cheaper, particularly at Facebook's scale.
I don't know about Facebook specifically, but in general people underestimate the amount of stuff that needs to happen for a consumer-facing app of that scale. It's not just "serving html".
There are going to be thousands of teams whose job is to run thousands of services or workflows, each doing something incredibly obscure that is nonetheless necessary for some regulatory, commercial, or operational reason. (Yes, moderation would be one of those functions.)
> Also, even still, $10 per user has got to be an order of magnitude less than what OpenAI is spending on its free users, no?
No. OpenAI's inference costs in 2024 were a few billion dollars (IIRC there are two conflicting reports about the leaked financials, one putting inference costs at $2B/year, the other at $4B/year). That's the combined inference cost for their paid subscription users, API users, and free consumer users. And at the time they were reported to have 500M monthly active users.
Even if we make the most extreme possible assumptions for all the degrees of freedom (all costs can be assigned to the free users rather than the paid ones, the higher number for total inference spend, monthly users == annual users), the cost per free user would still be at most $8/year.
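The worst-case arithmetic above is simple enough to check directly (all figures are the ones quoted in this thread, with every assumption pushed in the unfavorable direction):

```python
# Upper bound on per-free-user inference cost, using the numbers above:
# - take the higher of the two leaked figures for 2024 inference spend
# - assign ALL of it to free users (ignore paid subscriptions and API)
# - treat the 500M monthly actives as if they were all annual users
inference_spend = 4e9     # $/year, higher of the two leaked figures
monthly_actives = 500e6   # reported MAU

cost_per_free_user = inference_spend / monthly_actives
print(f"${cost_per_free_user:.0f}/year per free user")  # $8/year
```

Any more realistic assumption (the $2B figure, or attributing most spend to paid/API usage) only pushes the per-free-user number lower.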