petesergeant's comments

> where people’s obvious mental health issues

I think the kids would call this "getting one-shotted by AI"



> The buck (or the bubble) will stop somewhere.

Why?


Every bubble in history has popped

But if you disagree I have an NFT to sell you


Is the internet a bubble? Has it stopped?

Were you working in 1999?

Bubbles don't stop forever, they pop and move back to equilibrium.


Right, like programming

> who ultimately buys the goods and services that companies produce?

You don’t need anyone to buy them if you already have all the capital. You sell goods and services to make more capital, but if you’ve got enough capital to provide for all your needs, you don’t need any buyers.


I’ve got it deployed in production for a dataset that changes infrequently and it works really well

For code, maybe? For documents, no, text embeddings are magical alien technology.

> The average person whose typical use case is asking ChatGPT how long they need to boil an egg for hasn't seen improvements for 18 months

I don’t think that’s true. I think both my mother and my mother-in-law would start to complain pretty quickly if they got pushed back to 4o. Change may have felt gradual, but I think that’s more a function of growing confidence in what they can expect the machine to do.

I also think “ask how long to boil an egg” is missing a lot here. Both use ChatGPT in place of Google for all sorts of shit these days, including plenty of stuff they shouldn’t (like: “will the city be doing garbage collection tomorrow?”). Both are pretty sharp women but neither is remotely technical.


I’ve heard “orders of magnitude” used more than once to mean 4-5 times

In binary 2x is one order of magnitude

exactly!

I've been wondering about this for quite a while now. Why does everybody automatically assume that I'm using the decimal system when saying "orders of magnitude"?!


I'd argue that 100% of all humans use the decimal system, most of the time. Maybe 1 to 5% of all humans use another system some of the time.

Anyway, there are 10 types of people, those who understand binary and those who don't.


Because, as xkcd 169 says, communicating badly and then acting smug when you're misunderstood is not cleverness. "Orders of magnitude" refers to a decimal system in the vast majority of uses (I must admit I have no concrete data on this, but I can find plenty of references to it being base-10 and only a suggestion that it could be something else).

Unless you've explicitly stated that you mean something else, people have no reason to think that you mean something else.
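For concreteness, the same ratio gives very different "orders of magnitude" depending on the base you assume — a factor of 4 is about 0.6 orders in the conventional decimal reading, but exactly 2 in binary. A quick sketch:

```python
import math

factor = 4.0

# "Orders of magnitude" is just a logarithm; the answer depends on the base:
oom_decimal = math.log10(factor)  # ~0.60 in the usual base-10 sense
oom_binary = math.log2(factor)    # exactly 2.0 if you insist on base 2

print(f"{factor}x is {oom_decimal:.2f} decimal orders of magnitude")
print(f"{factor}x is {oom_binary:.0f} binary orders of magnitude")
```

So "4-5 times" is well under one order of magnitude in the base-10 sense most readers assume.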


> nothing about the industry's finances add up right now

Nothing about the industry’s finances, or about Anthropic and OpenAI’s finances?

I look at the list of providers on OpenRouter for open models, and I don’t believe all of them are losing money. FWIW Anthropic claims (iirc) that they don’t lose money on inference. So I don’t think the industry or the model of selling inference is what’s in trouble there.

I am much more skeptical of Anthropic and OpenAI’s business model of spending gigantic sums on generating proprietary models. The latest Claude and GPT are very, very good, but not enough better than the competition to justify the cash spend. It feels unlikely that anyone is going to “winner takes all” the market at this point. I don’t see how Anthropic or OpenAI survive as independent entities, or how current owners don’t take a gigantic haircut, other than by Sam Altman managing to do something insane like reverse-acquiring Oracle.

EDIT: also feels like Musk has shown how shallow the moat is. With enough cash and access to exceptional engineers, you can magic a frontier model out of the ether, however much of a douche you are.


It's become rather clear from the local LLM communities catching up that there is no moat. Everyone is still just barely figuring out how these nifty data structures produce such powerful emergent behavior; there isn't any truly secret sauce yet.

> local LLM communities catching up that there is no moat.

they use Chinese open LLMs, but Chinese companies do have a moat: training datasets, some non-open-source tech, and also salaried talent, which one would need serious investment to match if deciding to bootstrap a competitive frontier model today.


I’d argue there’s a _bit_ of secret sauce here, but the question is if there’s enough to justify valuations of the prop-AI firms, and that seems unlikely.
