I wouldn't even be surprised if they were losing money on paying ChatGPT users from inference compute alone, and that isn't even factoring in the cost of developing new models.
There was an interesting article here (can't find the link, unfortunately) arguing that model training costs should be accounted for as operating costs rather than investments: last year's model is essentially a total write-off, and to stay competitive an AI company has to keep training newer frontier models more or less continuously.
Training costs are fixed, so they amortize over an effectively unlimited number of users, making them a perfect moat even if the models need continual retraining. Success would be 10-100x current users, at which point training costs at the current scale just don’t matter.
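A toy illustration of the amortization argument, in Python (both figures below are hypothetical, purely to show how per-user cost falls with scale):

    # Hypothetical fixed annual training spend amortized over a growing user base
    annual_training_cost = 3_000_000_000  # assumed: $3B/yr on frontier training
    for users in (250_000_000, 2_500_000_000):
        print(f"{users:,} users -> ${annual_training_cost / users:.2f}/user/yr")
    # 250,000,000 users -> $12.00/user/yr
    # 2,500,000,000 users -> $1.20/user/yr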
Really their biggest risk is total compute costs falling too quickly or poor management.
Not everyone is going to pay $20/month, but an optimistic trajectory is that they largely replace search engines while acting as a back end for a huge number of companies.
I don’t think it’s very likely, but think of it like an auction: in a room with 100 bidders, 99 of them should think the winner overpaid. In general, most people should feel a given startup was overvalued, and only in hindsight will some of these deals look like good investments.
WSJ reported today that ChatGPT has 250M weekly users. 10x that would be 2.5B, nearly a majority of internet users. 100x that would be 25B, significantly more than the population of Earth.
> I wouldn't even be surprised if they were losing money on paying ChatGPT users on inference compute alone
I'd be surprised if that were the case. How many tokens is the average user going through? I'd be surprised if the average user even hit 1M tokens, much less 20M.
o1 is about $2-$4 per message over the API. At that rate I'm probably costing OpenAI more than my subscription fee within 24 hours of each monthly renewal.
Voice mode is around $0.25 per minute via API. I don't use it that much, but 3 minutes per day would already exceed the cost of a ChatGPT Plus subscription by quite a bit.
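Quick back-of-envelope on that (the $0.25/min rate is the API price above; 3 min/day is just my own usage guess):

    # Voice mode cost vs. the $20 ChatGPT Plus fee
    rate_per_minute = 0.25   # quoted API rate above
    minutes_per_day = 3      # assumed usage
    monthly_cost = rate_per_minute * minutes_per_day * 30
    print(f"${monthly_cost:.2f}/month")  # $22.50, already above the $20 Plus fee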
I’m not sure I understand this, sorry. I see GPT-4o at $3.75 per million input tokens and $10 per million output tokens, on OpenAI’s pricing page.
That’s expensive, and I can’t see how they run Copilot on standard API pricing. But it puts a single message (one interaction?) at well under $4 by my math.
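For concreteness, a sketch using the rates quoted above (the token counts are my own guesses at a fairly heavy chat turn):

    # Per-message cost at the quoted GPT-4o rates
    input_rate = 3.75 / 1_000_000    # $ per input token
    output_rate = 10.00 / 1_000_000  # $ per output token
    input_tokens = 20_000   # assumed: long conversation history + prompt
    output_tokens = 1_500   # assumed: long-ish response
    cost = input_tokens * input_rate + output_tokens * output_rate
    print(f"${cost:.3f} per message")  # ~$0.090, nowhere near $4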
Thought of this way, AI companies look remarkably similar to Bitcoin mining companies, which always just barely stay ahead of the difficulty increases and often fail.