
For those of you who, like me, were confused about what "R1" is: it seems to be an LLM of some kind https://api-docs.deepseek.com/news/news250120


Not just "an" LLM, but *the* LLM (or specifically its chain-of-thought variant) that was in the news for days, caused NVIDIA's stock to crash (well, I'd prefer to call it a natural corrective action by the market), and set off something close to a panic in the US because it came out of China.


IMO the panic came from a few places:

1) If it's actually much cheaper to build a new model, the position of OpenAI et al. is weaker, because the barrier against new competitors is much lower than expected.

2) If you don't need a billion dollars in GPUs to build a model, NVIDIA maybe isn't worth as much either, because you don't need as much compute power to get to the same output. (IMO their valuation is still insanely high, but that's a different story.)

I don't think the drop was entirely irrational: they showed there were places in the training process where a lot of efficiency could be gained, which the existing players didn't seem to be working on.


Fair enough. I'm not interested in this stuff.

