Hacker News

> they’ll keep being used

How? I get that many devs like using them for writing code. Personally I don't, but maybe someday someone will invent a UX for this that I don't despise, and I could be convinced.

So what? That's a tiny market. Where in the landscape of B2B and B2C software do LLMs actually find market fit? Do you have even one example? All the ideas I've heard so far are either science fiction (just wait, any day now we'll be able to...) or just garbage (natural language queries instead of SQL). What is this shit for?




Anecdotally, almost every day I’ll overhear conversations at my local coffee shop of non-developers gushing about how much ChatGPT has revolutionized their work: church workers for writing bulletins and sermons, small business owners for writing loan applications or questions about taxes, writers using it for proofreading, etc. And this is small town Colorado.

Not since the advent of Google have I heard people rave so much about the usefulness of a new technology.


These are not the sorts of uses that make this thing valuable. To be worthwhile it needs to add value to existing products. Can it do that meaningfully well? If not, it's nothing more than a curiosity.


Worthwhile is a hard measure.

To make money, though, it just needs a large or important audience and a means of convincing people to think, want, or do things that people with money will pay to make people think, want, or do.

Ads, in other words.


Can you get enough revenue from ads to pay the cost of serving LLM queries? Has anyone demonstrated this is a viable business yet?

A related question: has anyone figured out how to monetize LLM input? When a user issues a Google search query they're donating extremely valuable data to Google that can be used to target relevant ads to that user. Is anyone doing this successfully with LLM prompt text?


I bet Google is monetizing the value of LLM input prompts with close to the same efficiency as it monetizes search. In that case, there are two questions: 1) will LLMs overtake search? and 2) can anyone beat Google at monetizing these inputs? I think the answer to both is no. Google already has a wide experience lead in monetizing queries. And personally, I'd rather have a search engine that does a better job of excluding spam without having to worry about whether it's making stuff up. Kagi has better search than any of the LLMs (except for local results like restaurants/maps).


> Do you have even one example?

My company uses them for a fuckton of things that were previously intractable for static logic (because humans are involved).

This is mostly in the realm of augmented customer support (e.g. customer says something, and the support agent immediately gets the summarized answer on their screen)

None of it is something that couldn’t be done without them, but when the whole problem can be reduced to “write a good prompt,” a lot of use cases suddenly come within reach.
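To make the “write a good prompt” point concrete, here is a minimal sketch of the support-augmentation pattern described above, assuming an OpenAI-style chat API. The prompt text, the `build_agent_messages` helper, and the model name are all illustrative assumptions, not details from the original comment:

```python
# The "business logic" of the support assistant lives entirely in the
# system prompt rather than in hand-written static rules.
SYSTEM_PROMPT = (
    "You are a customer-support assistant. Summarize the customer's "
    "message in one sentence, then suggest a short answer the human "
    "agent can adapt before replying."
)

def build_agent_messages(customer_message: str) -> list:
    """Build the message list for an OpenAI-style chat-completion call."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": customer_message},
    ]

# The actual network call would look roughly like this (requires an
# API key; commented out so the sketch stays self-contained):
#
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",  # illustrative model name
#     messages=build_agent_messages("My March invoice charged me twice."),
# )
# print(reply.choices[0].message.content)  # shown on the agent's screen
```

The whole integration is a template plus one API call; changing the product behavior means editing the prompt string, which is what makes these previously intractable cases cheap to attempt.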

It’s an open question whether they’ll keep it around once they realize it doesn’t always quite work, but at least right now MS is making good money off of it.


LLMs are incredible at editing my writing. Every email I write is improved by LLMs. My executive summaries are improved by LLMs. It won't be long until every single office worker is using LLMs as an integral part of their daily stack; people just have to try it and they'll see how useful it is for writing.

Microsoft turned itself into a trillion-dollar company off the back of enterprise SaaS products, and LLMs are among the most useful.


> What is this shit for?

Various minor things so far. For example, I heard about ChatGPT being evaluated as a tool for providing answers to patients in therapy. ChatGPT's answers were rated as more empathetic, more human, and more aligned with therapy guidelines than answers given by human therapists.

Providing companionship to lonely people is another potential market.

It's not as good as people at solving problems yet, but it's already better than humans at bullshitting them.


Are people actually satisfied by that? I personally find "chatting" with an LLM grating and dissatisfying because it often makes very obvious and incongruous errors, and it can't reason. It has no logical abilities at all, really. I think you're really underestimating what a therapist actually does, and what human communication actually is. It's more than word patterns.

I could see this being useful in a "dark pattern" sense, but only if it's incredibly cheap, to increase the cost to the user of engaging with customer support. If you have to argue with the LLM for an hour before being connected to an actual person who can help you, then very few calls will make it to the support staff and you can therefore have a much smaller team. But that only works if you hate your users.


Subjective evaluation of "humanity" and "empathy" in responses is much less important than clinical outcome. I don't think an online chat with a nebulous entity will ever be as beneficial as interactions that can, at least occasionally, be in person. Especially as trust in online conversations degrades. Erosion of trust online seems like a major negative consequence of all the generative AI slop (LLM or otherwise).


Clinical outcomes for human therapists would only be better if, for some reason, doing therapy worse (less in line with taught guidelines) produced better results. But sure, we can wait for more research or follow-ups; it might be true. Therapy has dismal outcomes anyway, and the outcomes are mostly independent of which theoretical framework the therapy follows. It might be the case that the only value in therapy is a human connection that AI fails to simulate. But it seems that for some people it simulates connection pretty well.



