I'm not sure I see the bear argument for Nvidia here. Huge AI models certainly drive Nvidia sales, but those same models are also widely thought to be untrainable, and nearly un-runnable, outside of large datacenters.
To me, this is ripe for an application of the Jevons paradox. If architectural improvements make similar models cheaper, I would expect to see more of them trained and deployed, not fewer, ultimately increasing the market for GPU-like hardware.