Where is the actual financial modelling? This is pure speculation?
I understand being bearish and frightened of AI, but this accounts for absolutely NOTHING. In particular, it doesn't include any projections on potential ad revenue which is likely going to be huge given their DAU and what you can extrapolate their ARPU to be based on other big tech advertisers.
> ad revenue which is likely going to be huge given their DAU and what you can extrapolate their ARPU to be based on other big tech advertisers.
Ad revenue doesn't come out of thin air. Unless budgets and TAM in the ad space increase (hint: they won't), the spend has to mostly come from cannibalizing META and Google. In that regard, I wish them luck - that will be a long and bloody battle. And both of the established players can fight it longer than OAI because they have actual revenue streams and strong cash balances.
>Where is the actual financial modelling? This is pure speculation?
Doom-and-gloom articles about OpenAI are almost always speculation, with no actual evidence backing the claims. The issue is that people love a good "AI is going to fail" story, so it gets shot up to the front page. Unfortunately, some journalists now know that it can rake in clicks, so they will happily compromise their journalistic integrity to ride the wave.
OpenAI have signed something like $1.5tn worth of future spending deals as of the end of last year, whilst making something like $13bn of _revenue_ for the year. There's no way that any of this can add up.
"signed" $1.5T, or issued press releases that hint to $1.5T in synergistic, cross-collateralized theoretical future deals funded by market frenzy and investor inertia? i.e. how much of their own money has OpenAI committed?
One problem with OpenAI advertising is that users are already moving towards Gemini, which doesn't show ads.
ChatGPT is (arguably) mostly worse than Gemini too, and Gemini isn't nearly as rate limited. So they're already losing users, and their product is a worse experience than their competition's.
Sure, OpenAI will make some money from ads, but will it be anything close to offsetting the amount of money they're burning? It seems unlikely to me. They really need to be bought out by a sugar-momma who can afford to play this kind of game, like MSFT.
OpenAI hit 800M weekly active users, and communication to OpenAI investors from this week states:
> "Both our Weekly Active User (WAU) and Daily Active User (DAU) figures continue to produce all-time-highs (Jan 14 was the highest, Jan 13 was the second highest, etc.)"
This does not indicate that they're losing users, at all...
Ahh, maybe they're not losing users after all. I was thinking about market share as reported in several articles. I assumed that if they were losing big lumps of market share they had to be losing users too, but I guess you can still grow even so.
How do we know that ad revenue will be huge? 80% of the questions that I ask can't be monetized because they're not about purchase intent. And even if they could be, has OpenAI built an auction system to bid on keywords? How exactly will all this work and be streamlined in the next 18 months, to the point that it could generate the revenue they need to keep up with the ridiculous investment requirements in infrastructure?
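To be fair, the auction mechanics themselves are the easy part. Here's a toy sketch of a second-price keyword auction (the general shape Google's system is based on), with entirely made-up advertisers and bids; the hard part is everything around it: targeting, billing, measurement, fraud detection.

```python
def second_price_auction(bids):
    """Toy second-price (Vickrey-style) auction for one keyword.

    bids: dict mapping advertiser -> bid in dollars.
    The highest bidder wins but pays the second-highest bid.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    # With a single bidder, the winner just pays their own bid.
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Hypothetical advertisers bidding on the keyword "running shoes"
bids = {"acme_shoes": 2.50, "brand_x": 1.75, "cheap_soles": 0.90}
winner, price = second_price_auction(bids)
print(winner, price)  # acme_shoes 1.75
```

Building and running this at scale, with real money and real adversaries, is an 18-month project all by itself.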
The thing I keep coming back to is that an LLM-backed query is so, so much more expensive than a typical web request. What kind of advertising is going to deliver the value necessary to cover those costs, plus margin? Chatbots aren't YouTube; users aren't going to sit through 30-second ads, I don't think.
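Some back-of-envelope arithmetic makes the gap concrete. The per-query costs below are illustrative assumptions, not measured figures, but the conclusion is robust across a wide range of plausible values:

```python
# Illustrative (assumed) serving costs, in dollars per request.
llm_query_cost = 0.01      # an LLM-backed query (assumption)
web_request_cost = 0.0001  # a conventional web request (assumption)

ratio = llm_query_cost / web_request_cost
print(f"LLM query costs ~{ratio:.0f}x a plain web request")

# Ad revenue is usually quoted as RPM: revenue per 1000 impressions.
# To cover serving cost alone (no margin), the required RPM would be:
required_rpm = llm_query_cost * 1000
print(f"break-even RPM: ${required_rpm:.2f}")
```

Typical display-ad RPMs are single-digit dollars at best, so even under generous assumptions the ads on an LLM response would have to monetize far better than ordinary web ads just to break even.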
From what I understand of the advertising market, companies like Google and Facebook make bucketloads of money from ads primarily because they own so much of the vertically integrated ad market. Meanwhile, the way OpenAI appears to be integrating ads suggests they're positioned to take only the smallest slice of the pie - a place to host ads - which means I'd estimate their revenue-per-user to be a lot closer to, say, a newspaper website than to the biggest social media sites, or maybe along the lines of Twitter or Tumblr, which never posted spectacular profits.
Altman saying they are going to spend a trillion-plus is (if anything) an anti-signal to what the actual financial plan looks like. He is way out front as the hype man and booster. Most of what he says is wishful thinking or an outright lie.
Sometimes these "articles" are sent out as thinly veiled "press releases" prior to a new round of investment. Sometimes someone who thinks they are a "reporter" has what they think is an "exclusive" or a "hot take". Regardless, as someone who has spent all of his career in startups...this is...business as usual. Another round of funding/financing will commence. OpenAI will be fine unless investors lose confidence in AI. We won't know how it will play out until it plays out. Media outlets reporting on this are playing off the AI bubble hype for clicks. (Yes, we are in a bubble. No, nobody knows when it will pop, nor how bad it will be. Ad-driven company just wants more ad revenue, nothing to see here, move along.)
Langfuse is by far the best of the langs (-chain, -graph, -smith, -flow) in terms of UI/DX/integration/docs/quality.
Interesting development, announced at the same time as ClickHouse raising $400M Series D at $15B valuation.
Doesn't seem like much is changing at Langfuse - still open source, was already hosted on ClickHouse Cloud, and a lot of collaboration already existed between both teams.
I’ve never worked in big tech, but I have seen the same dynamics play out in much smaller orgs.
If you’re constantly nitpicking and expressing concerns, you become “that person” who’s constantly negative about other people’s ideas. After a while people tune out; they already know that you’ll find “problems.” We all know these people. No one really likes working with them. Thus they’re _not effective_ at what they’re trying to do.
Ultimately you mostly get credit for shipping things that work, and only rarely for preventing the mistakes of other people.
At its core, what the blog post is saying is: keep your powder dry for when it matters. Not every problem is going to make the company insolvent. Not every concern will prove correct. Pick your battles strategically.
It’s good advice no matter the size or nature of the org.
Yeah, ultimately you're paid to deliver results. Criticism is only of value to the degree that it leads to better results; there is zero value in predicting failure per se. Some people place so much value on being right that they lose sight of the actual goals (and I won't say I'm immune to this, but marriage helped). Nothing with a high upside is low risk, so as an employee you need to inherently frame all risks in terms of identifying the most likely path to success.
The only alternative is to advocate for inaction, but then why are they paying you? Those kinds of bets can make sense for private equity investors, but not for employees, and my builder-brain just finds them dull and annoying.
Dishonesty layered on dishonesty, marketed with an arrogant smirk on top. I feel like tech culture has fully internalized the ethos of "no attention is bad attention", so having a lack of scruples and a talent for rage baiting is now seen as an advantage. It might pay off well for some people in the short term, but it's not a sustainable way to run a society.
> They can and most likely will release something that vaporises the thin moat you have built around their product.
As they should if they're doing most of the heavy lifting.
And it's not just LLM-adjacent startups at risk. LLMs have enabled any random person with a Claude Code subscription to pole-vault over your drying-up moat over the course of a weekend.
LLMs by their very nature subsume software products (and services). LLM vendors are actually quite restrained - the models are close to being able to destroy the entire software industry (and I believe they will, eventually). However, at the moment, it's much more convenient to let the status quo continue, and just milk the entire industry via paid APIs and subscriptions, rather than compete with it across the board. Not to mention, there are laws that would kick in at this point.
I think the function of a company is to address limitations of a single human by distributing a task across different people and stabilized with some bureaucracy. However, if we can train models past human scales at corporation scale, there might be large efficiency gains when the entire corporation can function literally as a single organism instead of coordinating separate entities. I think the impact of this phase of AI will be really big.
> the models are close to being able to destroy the entire software industry
Are you saying this based on some insider knowledge of models being dramatically more capable internally, yet deliberately nerfed in their commercialized versions? Because I use the publicly available paid SOTA models every day and I certainly do not get the sense that their impact on the software industry is being restrained by deliberate choice but rather as a consequence of the limitations of the technology...
I don't mean the companies are hoarding more powerful models (competition prevents that) - just that the existing models already make it too easy for individuals and companies to build and maintain ad-hoc, problem-specific versions of many commercial software services they now pay for. This is the source of people asking, why haven't AI companies themselves done this to a good chunk of software world. One hypothesis is that they're all gathering data from everyone using LLMs to power their business, in order to do just that. My alternative hypothesis is that they already could start burning through the industry, competing with whole classes of existing products and services, but they purposefully don't, because charging rent from existing players is more profitable than outcompeting them.
I believe there has never been a better time to do a micro SaaS. For $200 a month you can use Ruby on Rails, Laravel, AdonisJS, or some other boring full-stack framework to vibe code most things you need. Only a few things need to be truly original in any given SaaS product, while most of it is just the same old stuff that is amenable to vibe coding.
This means the smaller niches become viable. You can be a smaller team targeting a smaller niche and still be able to pull off a full SaaS product profitably. Before, it would just be too costly.
And as you say, the smaller niches just aren't interesting to the big companies.
When some new tech comes along that unlocks big new possibilities - like PCs, the Internet, smartphones (and now agentic chat AI) - the oft-recited wisdom is that you should look at what open green fields are now accessible that weren't before, and you should run there as fast as possible to stake your claim. Well, there are now a lot of small pastures available that are also profitable to go for as a small team/individual.
I think that feeling is what you get when you read too much Hacker News :) There are, in fact, more startups being created now than ever. And I promise you, people said the same thing about going up against IBM back in the day...
That entire website may very well be AI generated including the blog post itself.
We need a new movement, "Prompt Posts": instead of generating an entire blog post, share the prompt instead and save us having to skim 1,000 words of AI content.
a16z paved the path for a large chunk of the industry even if engineers don't like to admit it.
Even something like this would have been impossible to realise unless you had an engineering background and were hands-on:
> Ben didn’t think Hadoop was going to be the winning architecture. It was notoriously difficult to program and manage, and Ben thought it was poorly suited for the future: every step in a MapReduce computation wrote intermediate results to disk, which made it painfully slow for iterative workloads like machine learning.
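The disk-I/O point in that quote is easy to see in miniature. This toy sketch (not actual Hadoop or Spark code) contrasts the MapReduce pattern, where every iteration materializes its output to disk and reads it back, with keeping the working set in memory:

```python
import json
import os
import tempfile

def iterate_on_disk(data, steps):
    """MapReduce-style iteration: each step writes its full output to
    disk and the next step reads it back in (the pattern the quote
    describes). A simple arithmetic update stands in for one ML step."""
    path = os.path.join(tempfile.mkdtemp(), "intermediate.json")
    for _ in range(steps):
        data = [x * 0.9 + 1 for x in data]  # one "training" update
        with open(path, "w") as f:           # materialize to disk...
            json.dump(data, f)
        with open(path) as f:                # ...then read it all back
            data = json.load(f)
    return data

def iterate_in_memory(data, steps):
    """Spark-style iteration: the working set never leaves memory."""
    for _ in range(steps):
        data = [x * 0.9 + 1 for x in data]
    return data
```

Both compute the same result, but the disk version pays serialization and I/O on every single step, and iterative workloads like machine learning run tens or hundreds of such steps, which is exactly why MapReduce was "painfully slow" for them.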