
Deepseek should cause Nvidia and TSMC stocks to go up, not down. I'm buying more Nvidia and TSMC today.

1. More efficient LLMs should lead to more usage, which means more AI chip demand. Jevons paradox (rough numbers at the end of this comment).

2. To build a moat, OpenAI and American AI companies need to up their datacenter spending even more.

3. DeepSeek's breakthrough is in distilling models. You still need a ton of compute to train the foundation model you distill from.

4. The conclusion of DeepSeek's paper says more compute is needed for the next breakthrough.

5. DeepSeek's model is trained on GPT-4o/Sonnet outputs. Again, this reaffirms that to take the next step, you need to keep training better models. Better models will generate better data for next-gen models.

I think DeepSeek hurts OpenAI/Anthropic/Google/Microsoft. I think DeepSeek helps TSMC/Nvidia.
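
A back-of-envelope sketch of the Jevons point, with numbers invented purely for illustration: if an efficiency gain cuts the cost per token 10x and demand grows more than 10x in response, total chip spend goes up, not down.

    # All numbers below are hypothetical, only to illustrate the Jevons-paradox claim.
    price_per_m_tokens = 10.0        # $ per million tokens before the efficiency gain
    demand_m_tokens = 1_000          # million tokens demanded at that price

    efficiency_gain = 10             # cost per token falls 10x
    demand_multiplier = 25           # assumed: usage grows 25x as inference gets cheap

    old_spend = price_per_m_tokens * demand_m_tokens
    new_spend = (price_per_m_tokens / efficiency_gain) * (demand_m_tokens * demand_multiplier)
    print(old_spend, new_spend)      # 10000.0 25000.0 -> cheaper tokens, higher total spend

Whether the demand multiplier actually exceeds the efficiency gain is the whole bet, of course.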




That's a rational stance. However, the Buffett Indicator is flashing red, we're in a dotcom-era-sized bubble, and it only takes a little bit of ill-founded worry to kick off a serious panic.


But why would an AI breakthrough cause stocks to go down? It should cause them to go up. It means we should expect even more breakthroughs, better models, more LLM usage, etc.


Bull runs don't require a rational basis, and neither do panics.


You're looking at it from economic theory, not from the stock market. NVIDIA's insane valuation right now is based on an almost exponential increase in demand for more and more of its chips. It's priced in that NVIDIA will continue that trend. DeepSeek proves that trajectory is no longer needed (not that it was ever grounded in rationality), so anything less than continued exponential growth would send the stock down.


Jevons paradox suggests even more AI chips will be demanded after DeepSeek's breakthrough.


Ugh. No, we can now run a decent model on a CPU, not an expensive video card.

Just try out the standard deepseek-r1 or even the deepseek-r1:1.5B through ollama.

No need for expensive hardware locally anymore. My PC (without an Nvidia card or other expensive hardware) runs a deepseek-r1:1.5b query fast enough: 2-9 seconds until it's finished.
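
If you want to reproduce that timing, here's a minimal sketch using the ollama Python client (assumes `pip install ollama`, the ollama daemon running locally, and the deepseek-r1:1.5b model already pulled; the prompt is just an example):

    # Minimal CPU timing sketch; assumes `ollama pull deepseek-r1:1.5b` was already run.
    import time
    import ollama

    start = time.time()
    response = ollama.chat(
        model="deepseek-r1:1.5b",
        messages=[{"role": "user", "content": "Explain Jevons paradox in one sentence."}],
    )
    elapsed = time.time() - start

    print(response["message"]["content"])
    print(f"answered in {elapsed:.1f}s without a GPU")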


The world isn't satisfied with a "decent model". The world is trying to reach AGI.

Furthermore, reasoning models require more tokens. The faster the GPU, the more thinking it can do in a set amount of time. This means the faster the hardware, the smarter the model output. Again, that reinforces the need for faster hardware.

More thinking = smarter models

Faster hardware = more thinking

Therefore, faster hardware = smarter models
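
A toy illustration of that chain, with made-up decode speeds and a fixed latency budget:

    # Made-up throughput numbers, just to show how a faster chip buys more reasoning
    # tokens inside the same latency budget.
    latency_budget_s = 10
    slow_tokens_per_s = 20           # assumed CPU-class decode speed
    fast_tokens_per_s = 200          # assumed GPU-class decode speed

    print(latency_budget_s * slow_tokens_per_s)   # 200 thinking tokens
    print(latency_budget_s * fast_tokens_per_s)   # 2000 thinking tokens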


I would shoot myself if I had to wait 9 seconds for a query. Sometimes I even want to kill my browser for taking seconds to open a page...


2-4 seconds without streaming on a non-GPU machine.



