
> That's not worth billions. It's definitely not worth trillions.

That is a problem for the VCs that bet wrong, not for the world at large.

The models exist now and they’ll keep being used, regardless of whether a bunch of rich guys lost a bunch of money.




Their ongoing operation is quite expensive, so even that is not assured.


My ongoing operation is a MacBook Pro that costs pennies' worth of electricity.


Where are you getting this from? Outside of o3, every AI provider's API is super cheap, with most productive queries I do coming in under 2c. We have no reason to believe any of them are selling API requests at a loss. I think <2c per query hardly counts as "quite expensive".


The reason people believe they're selling API requests at a loss is simply their financial statements. Anthropic burned $3B this year. OpenAI lost $5B. Microsoft has spent $19B on AI and Google has spent close to $50B. Given that revenue for the market leader OpenAI is $3.7B, it's safe to say they're losing massive amounts of money.

These companies are heavily subsidized by investors and their cloud service providers (like Microsoft and Google) in an attempt to gain market share. It might actually work - but this situation, where a product is sold below cost to drum up usage and build market share with the intent to gain a monopoly and raise prices later, is sort of the definition of a bubble, and it's exactly how the mobile app bubble, the dot-com bubble, and previous AI bubbles played out.
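A quick back-of-envelope on the figures above (a sketch only; these are rough public estimates quoted in the thread, not audited statements):

```python
# Back-of-envelope using the figures quoted above; treat them as rough
# public estimates (in billions of USD), not audited numbers.
openai_revenue = 3.7   # reported revenue for the market leader
openai_loss = 5.0      # reported loss over the same period

# If revenue is $3.7B and the loss is $5B, implied costs are ~$8.7B:
implied_costs = openai_revenue + openai_loss
loss_per_revenue_dollar = openai_loss / openai_revenue

print(f"Implied costs: ~${implied_costs:.1f}B")                    # ~$8.7B
print(f"Loss per $1 of revenue: ~${loss_per_revenue_dollar:.2f}")  # ~$1.35
```

So on these numbers the market leader spends roughly $2.35 for every $1 it brings in, before you even separate training CapEx from inference OpEx.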


Are the training costs (CapEx) and inference costs (OpEx) being lumped together?


Not sure if it matters at this point. There will need to be many more rounds of CapEx to realize the promises that have been put forth about these models.


The implication would be that those API requests are being sold at a loss. Amodei wrote in January that Claude 3.5 Sonnet was trained for only a few $10Ms, but Anthropic has been losing billions.


That would be a killer for the current and near-future generations of LLMs as a business. If they have to pay several times more in compute than they can charge for API use (perhaps because near-comparable open models cap what they can charge?), then you definitely can't "make it up in volume".


> they’ll keep being used

How? I get that many devs like using them for writing code. Personally I don't, but maybe someday someone will invent a UX for this that I don't despise, and I could be convinced.

So what? That's a tiny market. Where in the landscape of B2B and B2C software do LLMs actually find market fit? Do you have even one example? All the ideas I've heard so far are either science fiction (just wait, any day now we'll be able to...) or just garbage (natural language queries instead of SQL). What is this shit for?


Anecdotally, almost every day I’ll overhear conversations at my local coffee shop of non-developers gushing about how much ChatGPT has revolutionized their work: church workers for writing bulletins and sermons, small business owners for writing loan applications or questions about taxes, writers using it for proofreading, etc. And this is small town Colorado.

Not since the advent of Google have I heard people rave so much about the usefulness of a new technology.


These are not the sort of uses we need to make this thing valuable. To be worthwhile it needs to add value to existing products. Can it do that meaningfully well? If not it's nothing more than a curiosity.


Worthwhile is a hard measure.

To make money, though, it just needs a large or important audience, plus a means of convincing people to think, want, or do things that people with money will pay to make them think, want, or do.

Ads, in other words


Can you get enough revenue from ads to pay the cost of serving LLM queries? Has anyone demonstrated this is a viable business yet?

A related question: has anyone figured out how to monetize LLM input? When a user issues a Google search query they're donating extremely valuable data to Google that can be used to target relevant ads to that user. Is anyone doing this successfully with LLM prompt text?


I bet Google is monetizing LLM input prompts with close to the same efficiency it monetizes search. In that case, there are two questions: 1) will LLMs overtake search? and 2) can anyone beat Google at monetizing these inputs? I think the answer to both is no. Google already has a wide experience lead in monetizing queries. And personally, I'd rather have a search engine that does a better job of excluding spam without having to worry whether or not it's making stuff up. Kagi has better search than any of the LLMs (except for local results like restaurants/maps).


> Do you have even one example?

My company uses them for a fuckton of things that were previously intractable for static logic (because humans are involved).

This is mostly in the realm of augmented customer support (e.g. customer says something, and the support agent immediately gets the summarized answer on their screen)

None of it is anything that couldn't be done without them, but when the whole problem can be simplified to "write a good prompt", a lot of use cases are suddenly within reach.

It's an open question whether they'll keep it around once they realize it doesn't always quite work, but at least right now MS is making good money off of it.
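The "write a good prompt" framing above can be sketched roughly like this; the function name, prompt wording, and knowledge-base shape are all hypothetical illustrations, not the commenter's actual system:

```python
# Hypothetical sketch of the "augmented customer support" flow described
# above: the hard part collapses into assembling a good prompt that the
# agent-assist tool sends to an LLM. Names here are illustrative only.

def build_support_prompt(customer_message: str, kb_snippets: list[str]) -> str:
    """Assemble the prompt an agent-assist tool might send to an LLM."""
    context = "\n".join(f"- {s}" for s in kb_snippets)
    return (
        "You are assisting a human support agent. Using only the "
        "knowledge-base notes below, summarize the likely answer to the "
        "customer's message in two sentences.\n\n"
        f"Knowledge base:\n{context}\n\n"
        f"Customer: {customer_message}\n"
        "Suggested answer:"
    )

prompt = build_support_prompt(
    "My invoice shows two charges for March.",
    ["Duplicate charges are auto-refunded within 5 business days."],
)
# The prompt would then go to whatever model the team uses, e.g.:
# reply = client.chat.completions.create(model=..., messages=[{"role": "user", "content": prompt}])
print(prompt)
```

The point is that the "logic" is just retrieval plus a template; the model fills in the judgment that static rules couldn't.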


LLMs are incredible at editing my writing. Every email I write is improved by LLMs. My executive summaries are improved by LLMs. It won't be long until every single office worker is using LLMs as an integral part of their daily stack; people just have to try it and they'll see how useful it is for writing.

Microsoft turned itself into a trillion-dollar company off the back of enterprise SaaS products, and LLMs are among the most useful of them.


> What is this shit for?

Various minor things so far. For example, I heard about ChatGPT being evaluated as a tool for providing answers to patients in therapy. ChatGPT's answers were evaluated as more empathetic, more human, and more aligned with therapy guidelines than answers given by human therapists.

Providing companionship to lonely people is another potential market.

It's not as good as people at solving problems yet, but it's already better than humans at bullshitting them.


Are people actually satisfied by that? I personally find "chatting" with an LLM grating and dissatisfying because it often makes very obvious and incongruous errors, and it can't reason. It has no logical abilities at all, really. I think you're really underestimating what a therapist actually does, and what human communication actually is. It's more than word patterns.

I could see this being useful in a "dark pattern" sense, but only if it's incredibly cheap, to increase the cost to the user of engaging with customer support. If you have to argue with the LLM for an hour before being connected to an actual person who can help you, then very few calls will make it to the support staff and you can therefore have a much smaller team. But that only works if you hate your users.


Subjective evaluation of "humanity" and "empathy" in responses is much less important than clinical outcome. I don't think an online chat with a nebulous entity will ever be as beneficial as interactions that can, at least occasionally, be in person. Especially as trust in online conversations degrades. Erosion of trust online seems like a major negative consequence of all the generative AI slop (LLM or otherwise).


Clinical outcomes for human therapists would only be better if, for some reason, doing therapy worse (less in line with taught guidelines) produced better results. But sure, we can wait for more research or a follow-up. It might be true. Therapy has dismal outcomes anyway, and the outcomes are mostly independent of which theoretical framework the therapy follows. It might be the case that the only value in therapy is a human connection that AI fails to simulate. But it seems that for some people it simulates connection pretty well.





