Hacker News

They might generate $10B in ARR, but they lose a lot more than that. Their paying users are a fraction of the free riders.

https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...





This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?" back in the mid 2000s. LLMs might shake out differently from the social web, but I don't think that speculating about the flexibility of demand curves is a particularly useful exercise in an industry where the marginal cost of inference capacity is measured in microcents per token. Plus, the question at hand is "will LLMs be relevant?" and not "will LLMs be massively profitable to model providers?"
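To put "microcents per token" in perspective, here's a minimal sketch converting a per-million-token API price into microcents per token. The $0.15/M figure is a hypothetical cheap-tier price for illustration, not a quote from any specific provider:

```python
# Convert an API price quoted in USD per million tokens into microcents
# (millionths of a cent) per token. Input price is an assumed example.

def microcents_per_token(usd_per_million_tokens: float) -> float:
    usd_per_token = usd_per_million_tokens / 1_000_000
    cents_per_token = usd_per_token * 100
    return cents_per_token * 1_000_000  # cents -> microcents

print(microcents_per_token(0.15))  # a hypothetical $0.15/M price -> 15.0 microcents/token
```

At that assumed price, a 1,000-token response costs about 0.015 cents, which is the scale the comment is gesturing at.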

Social networks finding profitability via advertising is what created the entire problem space of social media - the algorithmic timelines, the gaming, the dopamine circus, the depression, everything negative that’s come from social media has come from the revenue model, so yes, I think it’s worth being concerned about how LLMs make money, not because I’m worried they won’t, because I’m worried they Will.

I think this can't be overstated. It also destroyed search. I listened to a podcast a few years ago with an early Googler who talked about this very precipice in Google's early days. They did a lot of testing and a lot of modeling of people's valuation of search. They figured that the average person got something like $50/yr of value out of search (I can't remember the exact number; I hope I'm not off by an order of magnitude), and that was the most they could ever realistically charge. Meanwhile, advertising for just Q4 was worth something like 10 times that. It meant that they knew advertising on the platform was inevitable. They also acknowledged that it would lead to the very problem that Brin and Page wrote about in their seminal paper on search.

I see LLMs inevitably leading to the same place. There will undoubtedly be advertising baked into the models. It is too strong a financial incentive. I can only hope that an open source alternative will at least allow for a hobbled version to consume.

edit: I think this was the podcast https://freakonomics.com/podcast/is-google-getting-worse/


This is an interesting take. Is my "attention" really worth several thousand a year, in that my purchasing decisions are influenced by advertising to such a degree that someone is literally paying someone else for my attention?

I wonder: could I instead sell my "attention" myself, rather than others profiting off it?


Yes, but your attention rapidly loses value the more that your subsequent behavior misaligns with the buyer’s desires. In other words, the ability to target unsuspecting, idle minds far exceeds the value of a willing and conscious attention seller.

Social networks will have all of those effects without any effort by the platform itself, because the person with more followers has more influence, so the people on the platform will do all they can to get more.

I'm not excusing the platforms for bad algorithms. Rather, I believe it's naive to think that, but for the behavior of the platform itself, things would be great and rosy.

No, they won't. The fact that nearly every person in the world can mass communicate to nearly every other person in the world is the core issue. It is not platform design.


oh, I 100% agree with this. The way the social web was monetized is the root of a lot of evil. With AI, we have an opportunity to learn from the past. I think a lesson here is "don't wait to think critically about the societal consequences of the next Big Tech Thing's business model because you have doubts about its profitability or unit economics."

> This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?" back in the mid 2000s.

The difference is that Facebook costs virtually nothing to run, at least on a per-user basis. (Sure, if you have a billion users, all of those individual rounding errors still add up somewhat.)

By contrast, if you're spending lots of money per user... well look at what happened to MoviePass!

The counterexample here might be YouTube; when it launched, streaming video was really expensive! It's still expensive, but clearly Google has figured out the economics.


You're either overestimating the cost of inference or underestimating the cost of running a service like Facebook at that scale. Meta's cost of revenue (i.e. just running the service, not R&D, not marketing, not admin, none of that) was about $30B/year in 2024. In the leaked OpenAI financials from last year, their 2024 inference costs were 1/10th of that.

But their research costs are extremely high, and without a network effect that revenue is only safe until a better competitor emerges.

You're moving the goalposts, given the original complaint was not about research costs but about the marginal cost of serving additional users...

I guess you'd be surprised to find out that Meta's R&D costs are an order of magnitude higher than OpenAI's training + research costs? ($45B in 2024, vs. about $5B for OpenAI according to the leaked financials.)


Meta has a massively profitable social media business with an impenetrable network effect, so they're using that to subsidize the research. Whether that's a good decision or not is above my paygrade, but it's sustainable until something changes with the social media market.

I don't know what "moving the goalposts" means. Why were the goalposts there in the first place? The interesting questions here are whether OpenAI can sustain their current cost model long-term, and whether the revenue stream is sustainable without the costs. We'll see, I guess! It's fascinating.


I mean, the GP made a point about "per-user costs" that I believe was false, so that was the specific thing I was commenting on. Steering the discussion to a totally different topic of research costs doesn't help us reach closure on that point. It's basically new objections being thrown at the wall, and none being scraped off.

I think what you're not realizing is that OpenAI already has the kind of consumer-facing business that makes Google and Meta hundreds of billions of revenue a year. They have the product, they have the consumer mindshare and usage. All they are missing is the monetization part. And they're doing that at a vastly lower cost basis than Google or Meta, no matter what class of spending you measure. Their unit costs are lower, their fixed costs are lower, their R&D costs are lower.

They don't need to stop R&D to be profitable. Literally all they'd need to do is minimal ads monetization.

There's all kinds of things you can criticize the AI companies for, but the economics being unsustainable really isn't one of them. OpenAI is running a massive consumer-facing app for incredibly cheap in comparison to its peers running systems of a similar scale. It'd be way more effective to concentrate on the areas where the criticism is either obviously correct, or there's at least more uncertainty.


You keep saying we’re changing the goalposts, then you make a point that is exactly what I’m trying to address. Can OpenAI monetize without customers going elsewhere, since they have limited network effect? Can OpenAI stop spending on research to get their costs down? “Can OpenAI do this simple thing” is the whole question!

> Can OpenAI stop spending on research to get their costs down?

They do not need to. Their costs are already really low given the size and nature of their user base.

> “Can OpenAI do this simple thing” is the whole question!

There was a claim by someone else about OpenAI's unit costs being unsustainably high: I gave the data that shows they aren't. They are in fact quite low compared to those of bigtechs running comparable consumer services.

Then you said that the real problem was OpenAI's R&D costs being so high. I gave the data showing that is not the case. Their R&D costs are very low compared to those of bigtechs running comparable consumer services.

So I take it that you now agree that their unit and R&D costs are indeed low compared to the size of their user base? And the main claim is that they can't actually monetize without losing their users?

It seems hard to be totally confident about that claim either way, we'll only know once they start monetizing. But it is the case that the monetization they'd need to be profitable is going to be comparatively light. It just follows directly out of their cost structure (which is why the cost structure is interesting). They don't need to extract Facebook levels of money out of each user to be profitable. They can keep the ad volumes low and the ad formats inconspicuous to start with, and then boil the frog over a decade.

Like, somebody in the comments for this post said that ChatGPT has recently started showing affiliate links (clearly separated from the answer) for queries about buying products. I hadn't heard about it before now, but that is an obvious place to start: high commissions, high click-through rates, and it's the use case where the largest proportion of users will like having the ads rather than be annoyed by them.

So it seems that we'll find out sooner rather than later. But I'd be willing to bet money that there won't be any exodus of users from OpenAI due to ads.

Instead you'll see a slow ratchet effect: as OpenAI increases its level of ad-based monetization for ChatGPT, the less popular chatbots will follow a step or two behind. Basically, they'll let OpenAI establish the norms for ad frequency and format and take the minimal heat for it, but they won't try to become some kind of anti-ad champions promising free service with no ads in perpetuity.

The reason I expect this is that we haven't seen the opposite happen in other similar businesses. Nobody tried, for example, to make a search engine with no monetization. They might have tried making search engines that promised no personalized ad targeting, but nobody tried completely disowning the entire business model.


You're right, I was underestimating the cost of running Facebook! $30B spent / ~3B users = ~$10 per user per year. I'd thought it would be closer to 10¢.

Do you know why it's so expensive? I'd thought serving html would be cheaper, particularly at Facebook's scale. Does the $30B include the cost of human content moderators? I also guess Facebook does a lot of video now, do you think that's it?

Also, even still, $10 per user has got to be an order of magnitude less than what OpenAI is spending on its free users, no?


> Do you know why it's so expensive? I'd thought serving html would be cheaper, particularly at Facebook's scale.

I don't know about Facebook specifically, but in general people underestimate the amount of stuff that needs to happen for a consumer-facing app of that scale. It's not just "serving html".

There are going to be thousands of teams with job functions to run thousands of services or workflows doing something incredibly obscure but that's necessary for some regulatory, commercial or operational reason. (Yes, moderation would be one of those functions).

> Also, even still, $10 per user has got to be an order of magnitude less than what OpenAI is spending on its free users, no?

No. OpenAI's inference costs in 2024 were a few billion (IIRC there are two conflicting reports about the leaked financials, one putting inference costs at $2B/year, the other at $4B/year). That covers their paid subscription users, API users, and free consumer users combined. And at the time they were reported to have 500M monthly active users.

Even if we make the most extreme possible assumptions for all the degrees of freedom (all costs can be assigned to the free users rather than the paid ones, the higher number for total inference spend, monthly users == annual users), the cost per free user would still be at most $8/year.
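The envelope above can be sketched in a few lines. All figures are the leaked/reported numbers discussed in this thread, taken at their most pessimistic:

```python
# Worst-case per-free-user cost: assign the higher leaked inference figure
# ($4B/yr) entirely to the 500M reported monthly active users.
inference_cost_usd = 4e9
free_users = 500e6
cost_per_free_user = inference_cost_usd / free_users
print(cost_per_free_user)  # -> 8.0 ($/user/year)

# Meta comparison from upthread: ~$30B cost of revenue over ~3B users.
meta_cost_per_user = 30e9 / 3e9
print(meta_cost_per_user)  # -> 10.0 ($/user/year)
```

So even under the most extreme assignment of costs, OpenAI's per-free-user spend lands below Meta's per-user cost of revenue, which is the point being argued.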


> This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?"

The answer was, and will be ads (talk about inevitability!)

Can you imagine how miserable interacting with ad-funded models will be? Not just because of the ads they spew, but also the penny-pinching on training and inference budgets, with an eye focused solely on profitability. That is what the future holds: consolidation, little competition, and models that do the bare minimum, trained and operated by profit-maximizing misers, and not the unlimited-intelligence AGI dream they sell.


It won’t be ads. Social media targets consumers, so advertising is dominant. We all love free services and don’t mind some distraction.

AI, on the other hand, targets businesses and consumers alike. A bank using an LLM won’t get ads; using LLMs will be a cost of doing business. Do you know what that means for consumers? The price of ChatGPT will go down.


> Price for ChatGPT will go down.

As will the response quality, while maintaining the same product branding. Users will accept whatever response OpenAI gives them under the "4o", "6p", "9x" or whatever brand of the day, even as they ship-of-Theseus the service for higher margins. I've yet to see an AI service with QoS guarantees, or even a guarantee that the model weights and infrastructure won't be "optimized" over time to the customer's disadvantage.


>AI on the other hand targets businesses and consumers alike.

Okay. So AI will use ads for consumers and make deals with the billionaires. If Windows 11/12 still puts ads in what is a paid premium product, I see no reason for optimism that a "free" chatbot will not also resort to them. Not as long as the people up top only see dollar signs and not long-term viability.

>Price for ChatGPT will go down.

Price for ChatGPT is, in reality, going up in the meantime. This is like hoping grocery prices will come down as inflation eases. That never happens; you can only hope to be compensated more to make up for inflation.


Has any SAAS product ever reduced their subscription cost?

Does S3 count as a SaaS? Or is that too low level?

How about tarsnap? https://www.daemonology.net/blog/2014-04-02-tarsnap-price-cu...


I see a real window this time to sell your soul.

The thing about facebook/twitter/etc was that everyone knew how they achieve lock-in and build a moat (network effect), but the question was around where to source revenue.

With LLMs, we know what the revenue source is (subscription prices and ads), but the question is about the lock-in. Once each of the AI companies stops building new iterations and just offers a consistent product, how long until someone else builds the same product but charges less for it?

What people often miss is that building the LLM is actually the easy part. The hard part is getting sufficient data on which to train it, which is why most companies just put ethics aside and steal and pirate as much as they can before any regulation cuts them off (if any ever does). But that same approach means that anyone else can build an LLM and train on that data, and pricing becomes a race to the bottom, if open-source models don't cut them out completely.


ChatGPT also makes money via affiliate links. If you ask ChatGPT something like "what is the best airline-approved cabin luggage you can buy?" you get affiliate links to Amazon and other sites. I use ChatGPT most of the time before I buy anything these days. From personal experience (I operated an app financed by affiliate links), I can tell you that this for sure generates a lot of money. My app was relatively tiny, and I only got about 1% of the money I generated, but that app still pulled in about $50k per month.

Buying better things is one of my main use cases for GPT.


Makes you wonder whether the affiliate links are actual, valid affiliate links or just hallucinations from affiliate links it's come across in the wild

It is clearly 100% custom UI logic implemented by OpenAI. They render the products in carousels. They probably get a list of product and brand names from the LLM (for certain requests/responses) and render that in a separate UI after looking up affiliate links for those products. It's not hard to do: just slap your affiliate ID onto the links you found and you're done.
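The "slap your affiliate ID onto the links" step really is this simple. Here's an illustrative sketch; the `tag` query parameter follows Amazon's affiliate-link convention, but other retailers use different parameter names, so treat the specifics as assumptions:

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def add_affiliate_tag(url: str, affiliate_id: str) -> str:
    """Append (or overwrite) an affiliate tag on a product URL."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["tag"] = affiliate_id  # Amazon-style parameter; illustrative only
    return urlunparse(parts._replace(query=urlencode(query)))

# "B0EXAMPLE" and "myapp-20" are made-up placeholder values.
print(add_affiliate_tag("https://www.amazon.com/dp/B0EXAMPLE", "myapp-20"))
# -> https://www.amazon.com/dp/B0EXAMPLE?tag=myapp-20
```

Any existing query parameters survive; only the tag is added, which is why this kind of monetization is so low-friction to bolt onto a product-recommendation UI.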

ahh, okay. I don't use the service, I didn't realize they had a dedicated UI for it. I assumed it was all just embedded in the text.

Yep. Remember when Amazon could never make money and we kept trying to explain they were reinvesting their earnings into R&D and nobody believed it? All the rhetoric went from "Amazon can't be profitable" to "Amazon is a monopoly" practically overnight. It's like people don't understand the explore/exploit strategy trade-off.

AWS is certainly super profitable, but if the e-commerce business were standalone, would it really be such a cash gusher?

Amazon is successful because of the insanely broad set of investments they've made - many of them compound well in a way that supports their primary business. Amazon Music isn't successful, but it makes Kindle tablets more successful. This is in contrast to Google, which makes money on ads, and everything else is a side quest. Amazon has side quests, but also has many more initiatives that create a cohesive whole from the business side.

So while I understand how it looks from a financial perspective, I think that perspective is distorted in terms of what causes those outcomes. Many of the unprofitable aspects directly support the profitable ones. Not always, though.


> LLMs might shake out differently from the social web, but I don't think that speculating about the flexibility of demand curves is a particularly useful exercise in an industry where the marginal cost of inference capacity is measured in microcents per token

That we might come to companies saying "it's not worth continuing research or training new models" seems to reinforce the OP's point, not contradict it.


The point I'm making is that, even in the extreme case where we cease all additional R&D on LLMs, what has been developed up until now has a great deal of utility and transformative power, and that utility can be delivered at scale for cheap. So, even if LLMs don't become an economic boon for the companies that enable them, the transformative effect they have and will continue to have on society is inevitable.

Edit: I believe that "LLMs transforming society is inevitable" is a much more defensible assertion than any assertion about the nature of that transformation and the resulting economic winners and losers.


>what has been developed up until now has a great deal of utility and transformative power

I think we'd be more screwed than VR if development ceased today. They are little more than toys right now, whose most successful outings are grifts, and the most useful tools simply aid existing tooling (auto-correct). It is not really "intelligence" as of now.

>I believe that "LLMs transforming society is inevitable" is a much more defensible assertion

Sure. But into what? We can't just talk about change for change's sake. Look at the US in 2025 with that mentality.


No one ever doubted that Facebook would make money. It was profitable early on, never lost that much money and was definitely profitable by the time it went public.

Twitter has never been consistently profitable.

ChatGPT also has higher marginal costs than any of the software only tech companies did previously.


Well, given the answers to the former: maybe we should stop now before we end up selling even more of our data off to technocrats. Or worse, your chatbot shilling to you between prompts.

And yes, these are still businesses. If they can't find profitability they will drop it like it's hot, i.e. we hit another of the bubble bursts that tech is known for every decade or two. There's no free money to carry them anymore, so it's a perfect time to burst.


what I struggle with is that the top 10 providers of LLMs all have identical* products. The services have amazing capabilities, but no real moats.

The social media applications have strong network effects, this drives a lot of their profitability.

* sure, there are differences, see the benchmarks, but from a consumer perspective, there's no meaningful differentiation


This is perfect news for consumers and terrible news for investors. Which are you?

well, I'm surprised by the sky-high valuations I see in the context of the problem I have outlined above. This is great for consumers, sure.

The AI bubble is so big that if it pops, it will have dramatic effects on the economy.


The point is that if they’re not profitable they won’t be relevant since they’re so expensive to run.

And there was never any question as to how social media would make money, everyone knew it would be ads. LLMs can’t do ads without compromising the product.


You’re not thinking evil enough. LLMs have the potential to be much more insidious about whatever it is they are shilling. Our dystopian future will feature plausibly deniable priming.

That would be hilarious tbh

I can run an LLM on my RTX3090 that is at least as useful to me in my daily life as an AAA game that would otherwise justify the cost of the hardware. This is today, which I suspect is in the upper part of the Kuznets curve for AI inference tech. I don't see a future where LLMs are too expensive to run (at least for some subset of valuable use cases) as likely.

I don't even get where this argument comes from. Pretraining is expensive, yes, but both LoRAs in diffusion models and finetunes of transformers show us that this is not the be-all, end-all; there's plenty of work being done on extensively tuning base models for cheap.

But inference? Inference is dirt cheap and keeps getting cheaper. You can run models lagging the frontier by 6-12 months on consumer hardware, and by this I don't mean absolutely top-shelf specs, but more of "oh cool, turns out the {upper-range gaming GPU/Apple Silicon machine} I bought a year ago is actually great at running local {image generation/LLM inference}!" level. This is not to say you'll be able to run o3 or Opus 4 on a laptop next year; larger and more powerful models obviously require more hardware resources. But this should anchor expectations a bit.

We're measuring inference costs in multiples of gaming GPUs, so it's not the impending ecological disaster some would like the world to believe, especially after accounting for data centers being significantly more efficient at this, with specialized hardware, near-100% utilization, and countless optimization hacks (including some underhanded ones).


> LLMs can’t do ads without compromising the product.

Spoiler: they are still going to do ads, their hand will be forced.

Sooner or later, investors are going to demand returns on the massive investments, and turn off the money faucet. There'll be consolidation, wind-downs and ads everywhere.


Well, they haven't really tried yet.

The Meta app Threads had no ads for the first year, and it was wonderful. Now it does, and its attractiveness was only reduced by 1% at most. Meta is really good at knowing the balance for how much to degrade UX by having monetization. And the amount they put in is hyper profitable.

So let's see Gemini and GPT with 1% of response content being sponsored. I doubt we'll see a user exodus and if that's enough to sustain the business, we're all good.


I was chatting with Gemini about vacation ideas and could absolutely picture a world where if it lists some hotels I might like, the businesses that bought some LLM ad space could easily show up more often than others.

Well yeah, but if it doesn’t disclose that it’s sponsored content that’s illegal.

> LLMs can’t do ads without compromising the product.

It depends on what you mean by "compromise" here, but they sure can inject ads, like making the user wait 5 seconds, showing an ad, then replying.

They can delay the response times and promote "premium" plans, etc

Lots of ways to monetize, I suppose the question is: will users tolerate it?

Based on what I've seen, the answer is yes, people will tolerate anything as long as it's "free".


Sure, I’m not saying there’s no way of doing it, but a chat interface is deeply personal and not a space that ads have invaded quite just yet. If they want to show ad banners that’s one thing, but targeted diegetic ads are where the real money is. They can’t do that without compromising the chat experience imo.

To be fair, ads always compromise the product.

Social and search both compromised the product for ad revenue.

You’re not wrong, but I hope a robot spewing personalized ads at you while acting like a friend will be too unsettling for most people to tolerate


I guess all they have to do is start injecting ads into chatgpt responses. I'm sure that will be fine.

That's fixable, a gradual adjusting of the free tier will happen soon enough once they stop pumping money into it. Part of this is also a war of attrition though, who has the most money to keep a free tier the longest and attract the most people. Very familiar strategy for companies trying to gain market share.

That assumes that everyone is willing to pay for it. I don't think that's an assumption that will be true.

Consider the general research. In all, it doesn't eliminate people, but let's say it shakes out to speeding up developers 10% across all tasks. (That includes creating tickets, writing documentation, unblocking bugs, writing scripts, building proofs of concept, and more rote refactoring, but does not cover the harder problems or the hard work of software engineering that doesn't involve lines of code.)

That means that it's worth up to 10% of a developer's salary as a tool. And more importantly, smaller teams go faster, so it might be worth that full 10%.

Now, assume other domains end up similar - some less, some more. So, that's a large TAM.
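As a rough sketch of that arithmetic (the salary and headcount figures below are illustrative assumptions, not numbers from the thread):

```python
# Value-of-tooling envelope: a tool that speeds a worker up by X% is worth
# up to X% of that worker's cost. All inputs are assumed for illustration.

speedup = 0.10                 # assumed overall productivity gain
avg_salary = 120_000           # hypothetical fully-loaded annual cost, USD
developers_worldwide = 25e6    # hypothetical global developer headcount

value_per_dev = speedup * avg_salary        # ~ $12k per developer per year
tam = value_per_dev * developers_worldwide  # ~ $300B/yr under these assumptions
print(value_per_dev, tam)
```

Even if the real speedup or headcount is a few times smaller, the same envelope still lands on a TAM in the tens of billions per year, which is the "large TAM" claim above.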


Those that aren't willing to pay for it directly, can still use it for free, but will just have to tolerate product placement.

It very much does not assume that, only that some fraction will have become accustomed to using it to the point of not giving it up. In fact, they could probably remain profitable without a single new customer, given the number of subscribers they already have.

Absolutely, free-tier AI won’t stay "free" forever. It’s only a matter of time before advertisers start paying to have their products woven into your AI conversations. It’ll creep in quietly—maybe a helpful brand suggestion, a recommended product "just for you," or a well-timed promo in a tangential conversation. Soon enough though, you’ll wonder if your LLM genuinely likes that brand of shoes, or if it's just doing its job.

But hey, why not get ahead of the curve? With BrightlyAI™, you get powerful conversational intelligence - always on, always free. Whether you're searching for new gear, planning your next trip, or just craving dinner ideas, BrightlyAI™ brings you personalized suggestions from our curated partners—so you save time, money, and effort.

Enjoy smarter conversations, seamless offers, and a world of possibilities—powered by BrightlyAI™: "Illuminate your day. Conversation, curated."


I agree; it's easily fixable by injecting ads into the responses for the free tier, and probably eventually even the lower paid tiers to some extent.

Literally nobody would talk to a robot that spits back ads at them

I predict this comment to enter the Dropbox/iPod hall of shame of discussion forum skeptics.

Hundreds of millions of people watch TV and listen to Radio that is at least 30% ad content per hour.

And it’s such a shit experience that Netflix and podcasts destroyed those industries just by offering the same product without ads.

You still have faith in society after decades of ads being spit at them.

That's pretty much what search engines are nowadays

Competition is almost guaranteed to drive price close to the cost of delivery, especially if they can't pay Trump to ban open source, particularly Chinese models. With no ability to play the Thiel monopoly playbook, their investors would never make their money back if not for government capture and sweet, sweet taxpayer military contracts.

> especially if they can't pay trump to ban open source?

Huh? Do you mean for official government use?


OFAC against repo maintainers. Boom.

Then cut off the free riders. Problem solved overnight.




