So Nvidia is going to pretty much corner the market for a long time? This bit I expected but was still sad to read. Surely we would benefit from competition. It would probably take a lot of investment from AMD to make that happen, I imagine.
> AMD GPUs are great in terms of pure silicon: Great FP16 performance, great memory bandwidth. However, their lack of Tensor Cores or the equivalent makes their deep learning performance poor compared to NVIDIA GPUs. Packed low-precision math does not cut it. Without this hardware feature, AMD GPUs will never be competitive.
This has pretty much always been true. AMD cards have always had more FLOPS, more ROPs, and more memory bandwidth than the competing nVidia cards that benchmark the same. Is that a pro for AMD? Uhhh, doesn't really sound like it.
That’s the one thing that feels a bit misleading in the article (to be fair, it was initially written years ago and only got a partial rewrite recently). FLOPS comparisons given in the wild are not always apples-to-apples (e.g. not including Tensor Cores for NVIDIA, but including V_DUAL_DOT2ACC_F32_F16 for AMD), while on the flip side, AMD’s WMMA should address the same goals as Tensor Cores. I have an article comparing the two: https://espadrine.github.io/blog/posts/recomputing-gpu-perfo...
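To make the apples-to-oranges problem concrete, here's a back-of-the-envelope sketch (plain C++ host code). Every unit count, clock, and per-lane multiplier below is a made-up placeholder, not the spec of any real card; the only point is how much the headline TFLOPS figure swings depending on which execution units you choose to count:

```cpp
#include <cstdio>

// Theoretical throughput: units * lanes * FLOPs-per-lane-per-clock * GHz.
// An FMA counts as 2 FLOPs, which is where the baseline factor of 2 comes from.
static double tflops(double units, double lanes, double flops_per_lane_clk,
                     double ghz) {
    return units * lanes * flops_per_lane_clk * ghz / 1000.0;
}

int main() {
    // Hypothetical numbers, for illustration only.
    const double amd_cus = 96,  amd_lanes = 64,  amd_ghz = 2.4;
    const double nv_sms  = 128, nv_lanes  = 128, nv_ghz  = 2.5;

    // Plain vector FP32: one FMA per lane per clock = 2 FLOPs.
    printf("AMD vector FP32    : %6.1f TFLOPS\n", tflops(amd_cus, amd_lanes, 2, amd_ghz));
    // Packed / dual-issue FP16 dot products (V_DUAL_DOT2ACC_F32_F16-style):
    // assume ~4x the vector FP32 rate here.
    printf("AMD packed FP16    : %6.1f TFLOPS\n", tflops(amd_cus, amd_lanes, 8, amd_ghz));
    printf("NVIDIA vector FP32 : %6.1f TFLOPS\n", tflops(nv_sms, nv_lanes, 2, nv_ghz));
    // Dedicated matrix units (Tensor Cores): assume ~4x vector FP32, dense.
    printf("NVIDIA tensor FP16 : %6.1f TFLOPS\n", tflops(nv_sms, nv_lanes, 8, nv_ghz));
    return 0;
}
```

Quote AMD's packed-FP16 line against NVIDIA's vector-FP32 line and AMD looks ahead; compare matrix units to matrix units and the picture flips. Same silicon, very different headline numbers.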
> It would probably take a lot of investment from AMD to make that happen, I imagine
Don't AMD deliberately gimp their consumer cards to prevent cannibalising the pro cards? I vaguely recall reading about that a while back.
If so, they have already done the R&D but chose to reserve the tech for the higher-margin kit, which keeps hobbyists from buying AMD.
A few years ago AMD split their GPU architectures into CDNA (focused on data-center compute) and RDNA (focused on rendering for gaming and workstations). That in itself is fine, and it's what Nvidia was already doing; it makes sense to optimize silicon for each use case. But where AMD took a massive wrong turn is that they decided to stop supporting compute entirely on their RDNA (and all legacy) cards.
I'm not sure exactly what AMD expected to happen when doing that, especially when Nvidia continues to support CUDA on basically every GPU they've ever made: https://developer.nvidia.com/cuda-gpus#compute (looks like back to a GeForce 9400 GT, released in 2008)
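That breadth of support is easy to verify from code. Here's a minimal sketch using the standard CUDA runtime API (cudaGetDeviceCount / cudaGetDeviceProperties), which reports the compute capability that the linked table is organized around:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// List every visible CUDA device with its compute capability.
// Build with: nvcc -o list_devices list_devices.cu
int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found.\n");
        return 1;
    }
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("Device %d: %s (compute capability %d.%d)\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```

(To be fair, recent CUDA toolkits do drop compilation support for the very oldest compute capabilities, but the contrast with ROCm's narrow official support list is still stark.)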
Sadly, this is still a market segment in which a proprietary stack dominates. From AMD's perspective, the choice may be between throwing billions of dollars at a monopoly protected by intellectual property law, and probably failing, or taking a Pareto-principle approach and covering their usual niche.
> AMD GPUs are great in terms of pure silicon: Great FP16 performance, great memory bandwidth. However, their lack of Tensor Cores or the equivalent makes their deep learning performance poor compared to NVIDIA GPUs. Packed low-precision math does not cut it. Without this hardware feature, AMD GPUs will never be competitive.
Edit: what about Intel Arc GPUs? Any hope there?