Hacker News: Turfie's comments

Well, they might have become a little wary from previous boom and bust cycles, and perhaps a bit wary about the economic sustainability of the whole AI thing. However, they might also be driven by greed at this point: why not just constrain supply and increase margins while there's no real competitor?


I've been designing chips since 1997. The first 2 companies I was at had their own fabs. It's been a boom and bust industry for 50 years or more.

https://www.macrobusiness.com.au/2021/05/the-great-semicondu...

Here is a long article from last year about Sam Altman.

https://www.nytimes.com/2024/09/25/business/openai-plan-elec...

https://finance.yahoo.com/news/tsmc-rejects-podcasting-bro-s...

> TSMC’s leadership dismissed Altman as a “podcasting bro” and scoffed at his proposed $7 trillion plan to build 36 new chip manufacturing plants and AI data centers.

I thought it was ridiculous when I read it. I'm glad the fabs think he's crazy too. If he wants this then he can give them the money up front. But of course he doesn't have it.

After the dot com collapse my company's fabs were running at 50% capacity for a few years and losing money. In 2014 IBM paid GlobalFoundries $1.5 billion to take the fabs away. They didn't sell the fabs, they paid someone to take them away. The people who run TSMC are smart and don't want to invest $20-100 billion in new fabs that come online in 3-5 years just as the AI bubble bursts and demand collapses.

https://gf.com/gf-press-release/globalfoundries-acquire-ibms...


Thanks for the insights. The 'podcasting bro' bit is hilarious.

I don't think demand will collapse though, since the Mag7 have the cash flow to spend, and they can monetize when the time is ripe.

What do you think?


I started working during the dot com boom. I was getting 3 phone calls a week from recruiters on my work telephone number. Then I saw the bubble burst from mid-2000. In 2001 zero recruiters called me. I hated my job after the reorg and it took me 10 months to find a new one.

I know a lot of people in the 45+ age range including many working on AI accelerators. We all think this is a bubble. The AI companies are not profitable right now for the prices they charge. There are a bunch of articles on this. If they raise prices too quickly to become profitable then demand will collapse. Eventually investors will want a return on their investment. I made a joke that we haven't reached the Pets.com phase of the bubble yet.


I've thought about this too. I do agree that open source models look good and enticing, especially from a privacy standpoint. But these solutions are always going to remain niche solutions for power users. I'm not one of them. I can't be bothered to set up that whole thing (local or cloud) to gain some privacy and end up with an inferior model and tool. Let's not forget about the cost as well! Right now I'm paying for Claude and Gemini. I run out of Claude tokens really fast, but I can just keep going using Gemini/Gemini CLI at, seemingly, no cost at all.

The closed LLMs with the largest user bases will eventually outperform the open ones too, I believe. They have a lot of closed data that they can train their next generation on. The LLMs that the scientific community uses, especially, will become a lot more valuable (for everyone). So in terms of quality, the closed LLMs should eventually outperform the open ones, which is indeed worrisome.

I also felt anxious in early December about the valuations, but one thing remains certain: compute is in heavy demand, regardless of which LLM people use. I can't go back to pre-AI. I want more and more and faster and faster AI, and the whole world seems to be moving that way. I'm invested in physical AI at the moment (chips, RAM, ...), whose valuations look decently cheap.


I think you should reconsider the idea that frontier models will stay superior, for a couple of reasons:

- LLMs have fixed limitations. The first is training, i.e. the dataset you use: there's only so much information in the world and we've largely downloaded it all, so models can't get much better there. Next, you can train on specific things to make a model better at those things, but that is by definition niche, and you can already do it for free today with Google's Tensors in free cloud products. Later people will pay for this, but the point is that it's ridiculously easy for anyone to fine-tune a model; we don't need frontier companies for that.

Finally, LLM improvements come from small tweaks to existing models, and those tweaks reach open weights within a matter of months, often surpassing the frontier. All you have to do is sit on your ass for a couple of months and you have a better open model. Why would anyone do this? Because once all models are extremely good (about a year from now), you won't need them to be better; they'll already do everything you need in one shot, so you can afford to sit and wait for open models. Then the only reason left to use a frontier cloud is that they host a model, but other people host models too, because it's a commodity. (And by the way, people like me are already pissed off at Anthropic because we're not allowed to use OAuth with third-party tools, which is complete bullshit. I won't use them on general principle now; they're a lock-in moat, and I don't need them.)

There will also be better, faster, more optimized open models, which everyone is going to use: one model for math, a different one for general intelligence, another for coding, another for health, and so on, for the simple reason that they're faster, lower memory, and more accurate. Why do things 2x slower if you don't have to? Frontier model providers just don't provide this kind of flexibility, but the community does. Smart users will do more with less, and that means open.
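The "fine-tuning is just continuing training on niche data" claim above can be sketched in miniature. This is a toy, nothing like a real LLM pipeline: a linear model with "pretrained" weights is nudged onto a small made-up dataset with a few gradient steps. All names, numbers, and the learning rate are invented for illustration.

```python
# Toy illustration of fine-tuning: start from pretrained weights and
# take a few gradient steps on a small, niche dataset. Real fine-tuning
# (e.g. LoRA on an open-weights model) follows the same overall shape.
def predict(w, b, x):
    return w * x + b

def fine_tune(w, b, data, lr=0.1, epochs=200):
    # Plain per-sample gradient descent on squared error.
    for _ in range(epochs):
        for x, y in data:
            err = predict(w, b, x) - y
            w -= lr * err * x   # d(err^2)/dw = 2*err*x (constant folded into lr)
            b -= lr * err       # d(err^2)/db = 2*err
    return w, b

# "Pretrained" weights that roughly model y = 2x...
w, b = 2.0, 0.0
# ...adapted to a niche task where the true relation is y = 3x + 1.
niche = [(0.0, 1.0), (1.0, 4.0), (2.0, 7.0)]
w, b = fine_tune(w, b, niche)
print(round(w, 2), round(b, 2))  # close to 3.0 and 1.0
```

The point of the sketch is only that adaptation reuses the pretrained starting point; the heavy lifting (the base weights) is already done, which is why fine-tuning is cheap relative to pretraining.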

On the hardware:

- It will definitely continue to be investment-worthy, but be cautious. The growth simply isn't going to continue at this pace, for the simple reason that we've already got enough hardware. They want more hardware so they can keep trying to "scale LLMs" with brute force, but soon the LLMs will plateau, and brute force won't net the kind of improvements that justify the cost.

Demand for hardware is going to drop like a stone in 1-2 years. If they don't cease building and buying then, they risk devaluing it (supply and demand), but either way Nvidia won't be selling as much product, and there goes its valuation. And RAM is eventually going to get cheaper, so even if demand goes up, spending is less. The other reason demand won't continue at this pace is that investors are already scared, so the taps are being tightened (I suspect the "Megadeal" being put on hold is the big investment groups tightening their belts or trying to secure more favorable terms).

I honestly can't say what the economic picture is going to look like, but I guarantee you Nvidia will fall from its storied heights back to normal earth, and other providers will fill the gap. I don't know who for certain, but AMD just makes sense, because they're already supported by most AI software the way Nvidia is (try to run open-source inference today; it's one of those two). Frontier and cloud providers have Tensors and other exotic hardware, which is great for them, but everyone else is going to buy commodity chips. Watch for architectures with lower prices and better parts availability.


> There's only so much information in the world and we've largely downloaded it all, so it can't get better there.

What about all the input data into LLMs and the conversations we're having? That must be able to produce a better next gen model, no?

> it's ridiculously easy for anyone to fine-tune training, we don't need frontier companies for that.

Not for me. It'll take me days, and then I'm pretty sure it won't be better than Gemini 3 pro for my coding needs, especially in reasoning.

> For doing math you'll use one model, for intelligence you'll use a different model, for coding a different model, for health a different model, etc, and the reason is simple: it's faster, lower memory, and more accurate.

Why wouldn't e.g. Gemini just add a triage step? And are you sure it's that much easier to get a better model for math than the big ones?

I think you underestimate the friction that handpicking and/or training specific models causes regular users, while the big vendors are good enough for their needs.


> What about all the input data into LLMs and the conversations we're having? That must be able to produce a better next gen model, no?

Better models largely come from training, tuning, and specific "techniques" discovered to do things like eliminate loops and hallucinations. Human inputs are a small portion of that; you'll notice that all models are getting better despite the fact that all these companies have different human inputs! A decent amount of a model's ability comes from properties like temperature and top-p settings, which basically introduce variable randomness (frontier models now expose these as "low" and "high"). This can cause problems, but also increased capability, so the challenge isn't getting better input, it's better controlling randomness (sort of). Even coding models benefit from a small amount of this. But there is a lot more, so overall model improvements are not one thing; they are many things, and none of them are novel for long. In fact, open models get novel techniques before the frontier does, and it's been like that for a while.
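What temperature and top-p actually do at sampling time can be shown in a few lines. This is a self-contained toy sampler over made-up logits, not any particular model's implementation: temperature rescales the logits before the softmax, and top-p (nucleus sampling) keeps only the smallest set of tokens whose cumulative probability reaches the threshold.

```python
import math
import random

def sample(logits, temperature=1.0, top_p=1.0, rng=None):
    """Toy next-token sampler illustrating temperature and top-p."""
    rng = rng or random.Random(0)
    # Temperature -> 0 makes the distribution peaky (near-greedy);
    # higher temperature flattens it (more randomness).
    scaled = [l / max(temperature, 1e-8) for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Nucleus filtering: sort tokens by probability, keep the top mass.
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    # Sample from the renormalized kept set.
    kept_mass = sum(probs[i] for i in kept)
    r = rng.random() * kept_mass
    acc = 0.0
    for i in kept:
        acc += probs[i]
        if r <= acc:
            return i
    return kept[-1]

logits = [2.0, 1.0, 0.1, -1.0]
print(sample(logits, temperature=0.01))          # near-greedy: picks token 0
print(sample(logits, temperature=2.0, top_p=0.9))  # flatter, may pick others
```

The trade-off the comment describes falls out directly: low temperature makes outputs repeatable but can get stuck in loops, while higher temperature buys variety at the cost of occasional nonsense, which is why "controlling randomness" is its own tuning problem.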

> Not for me. It'll take me days, and then I'm pretty sure it won't be better than Gemini 3 pro for my coding needs, especially in reasoning.

If you don't want the improvements, that's up to you; I'm just saying the frontier has no advantage here, and if people want better than frontier, it's there for free.

> Why wouldn't e.g. Gemini just add a triage step? And are you sure it's that much easier to get a better model for math than the big ones?

They already do have triage steps, but despite that, they still create specific models for specific use cases. Most people already choose Thinking by default for general queries, and coding models for coding. That will continue, but there will be more providers of more specific models that will outperform frontier models, for the simple fact that there are a million use cases out there and lots of opportunity for startups and the community to create a better-tailored model for cheaper. And soon all our computers will be decent at running AI locally, so why pay for frontier anyway? I can already AI-code locally on a 4-year-old machine. Two years from now, there likely won't be a need for you to use a cloud service at all, because your local machine and a local model will be equivalent, private, and free.


Thank you. You have somewhat shifted my beliefs in a meaningful way.


I agree. Especially the whole Jony Ive and Altman hype video in that coffee shop was absolutely disgusting. Oh, how far their egos have been inflated, which leads to very bad decision making. Not to be trusted.


Oh, could I get a link to that one?




Just fantastic. Hadn't seen it. Thanks for sharing that!


Why are you anti Google?


Google collects and stores grotesque amounts of data about the public https://takeout.google.com


And OpenAI doesn't/wouldn't if they had the chance?


OpenAI absolutely would, but OpenAI can't.

Google can spy on everything: via its OS, its browser, its YouTube, its search engine, its ad network, its blog network, its maps app, its translation service, its fonts service, its 8.8.8.8, its office suite, its captcha, its analytics service, and on and on and on...


Exactly my thoughts as well.


My brother convinced me to try a 1Password family account, since it would be cheaper. Ever since, the Chrome plugin takes forever to log in. Sometimes up to 5-6 seconds. And it really annoys me that they have so many resources and so much money, yet it's still this expensive for a very, very basic application, and slow to boot.

I tried out Passwords, and combined with Safari it's an absolute godsend compared to 1Password. That does mean I switched from Brave to Safari, and thus have YouTube ads, so I'm now paying for YouTube, haha.


> Ever since, the Chrome plugin takes forever to log in.

This hasn't been my experience since the recent update that shows a mini login panel when trying to sign in. The old flow that opened the desktop app first was fairly slow.


Just to rub it in your face :) (teasingly and with respect) I got Android/LastPass/Firefox and only pay for the LastPass annually (I got it on all my devices), so there you have it ;)


Just so you're aware, LastPass has had some pretty bad security issues, eg. on the latest https://arstechnica.com/information-technology/2022/12/lastp...




Check out the StopTheMadness extension. It offers a large number of features, one of which is automatically fast-forwarding YouTube ads.

https://underpassapp.com/news/2023-10-19.html


Does it blank all the fake videos on your YouTube home page? Ads used to be separate. Then they started putting one in the upper-left corner that pretended to be a real video, with some clickbait title. Now (today?) they're sprinkled all over; maybe 15% of all the thumbnails are ads.

I'm leaving that platform. They've taken enshittification to new heights.


I have never had such issues with the Chrome plugin.


It's never random. All is just a process of causes and conditions, albeit a very complex one. We just can't track all the variables.


At the root of medicine is biochemistry, and at the root of chemistry is quantum physics: the formation and breaking of chemical bonds are, at their core, random events shaped by probabilities. We can only say how likely events are, not which ones will happen when.


Or rather even if it is deterministic, it might as well be random to us.


Sure, in the same sense that a roulette table or a deck of shuffled cards isn't actually random, it's just cause and effect.


It's random for all practical purposes if you can't figure out the causes.


This whole 'smart contracts' thing is stupid.

The issue is trust. Smart contracts as a solution are a sad attempt at solving that issue. There are always edge cases you can't code in.

E.g., in the example of the neighbour buying goods: what if the other neighbours don't really like this guy and conspire against him?

The solution to trust isn't codifying every possible variable and putting it on some silly unmodifiable chain. That solution just shifts the trust: instead of trusting humans, we're now trusting some poorly written code...

The solution to the trust issue is good ethics, mindfulness and good upbringing. If someone violates your trust, it's probably due to some stress factor in that person's life. We need to create a better world for everyone, actually help the people that are violating our trust, and show them the way.
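The conspiring-neighbours edge case can be made concrete with a toy sketch. Everything here is invented for illustration (class name, voters, the majority rule); it resembles no real on-chain system, but it shows the structural problem: once the release rule is frozen in code, the code cannot distinguish honest disapproval from collusion.

```python
class ToyEscrow:
    """Toy escrow 'smart contract': funds are released to the seller
    only if a majority of designated neighbours approve. The rule is
    fixed at deployment, just like immutable on-chain code."""

    def __init__(self, amount, voters):
        self.amount = amount
        self.voters = set(voters)
        self.approvals = set()

    def approve(self, voter):
        if voter in self.voters:
            self.approvals.add(voter)

    def payout(self):
        # Majority rule, immutably encoded. Nothing here can tell an
        # honest "no" from a conspiracy to withhold approval.
        if len(self.approvals) * 2 > len(self.voters):
            return self.amount  # released to the seller
        return 0                # seller gets nothing, fairly or not

escrow = ToyEscrow(100, voters=["ann", "bob", "cid", "dee", "eve"])
# The goods were delivered, but three neighbours dislike the seller
# and simply never vote. The contract executes exactly as written.
escrow.approve("ann")
escrow.approve("bob")
print(escrow.payout())  # 0: the trust problem was shifted into the voters, not solved
```

The contract ran "correctly" in both the honest and the colluding scenario; the dispute just moved from "do I trust my neighbour" to "do I trust the voters and the code", which is the shift the comment above is pointing at.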

