Absolutely agree on this.
1. DeepSeek just inspired a LOT of startups to develop their own models, since you no longer need to be a tech giant to compete on training.
2. Companies with sensitive info will now buy their own GPUs to run models locally, since the range of applications has increased (as it did with Llama 3).
3. As with 2, new services that were cost-prohibitive with the ChatGPT API will spring up. Reliably renting GPUs for a production service is difficult even with enough money, so people will buy more GPUs to host models themselves.
I would understand the sell-off if a new CPU/GPU could outperform Nvidia's and DeepSeek had been trained on it, but that is not the case. Historically, lower requirements for a given level of performance have never reduced demand for computational capacity. There is very little justification for this move other than purely "technical" (trading-wise) reasons.