Not just "an" LLM, but the LLM (specifically, its chain-of-thought variant) that dominated the news for days, caused NVIDIA's stock to crash (though I'd prefer to call it a natural corrective action by the market), and nearly caused a panic in the US because it was Chinese, and so on.
1) If it's actually much cheaper to build a new model, the position of OpenAI et al. is weaker, because the barrier to entry for new competitors is much lower than expected.
2) If you don't need a billion dollars' worth of GPUs to build a model, NVIDIA maybe isn't worth as much either, because you don't need as much compute to reach the same output. (IMO their valuation is still insanely high, but that's a different story.)
I don't think the drop was entirely irrational: they showed the training process had room for large efficiency gains that the existing players didn't seem to be working on.