So Apple would need 16x its GPU cores, i.e. 128 GPU cores, to reach Nvidia 3090 desktop performance. That works out to roughly 480 mm² of die area and a 192 W TDP, excluding the memory controller and interconnect.
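Spelling out the linear-scaling arithmetic behind those numbers (the per-block figures are just the stated totals divided by 16, so treat them as rough):

    \[ 128 \text{ cores} = 16 \times 8 \text{ cores (M1 GPU)} \]
    \[ 480\ \mathrm{mm^2} \approx 16 \times 30\ \mathrm{mm^2/block},\qquad 192\ \mathrm{W} \approx 16 \times 12\ \mathrm{W/block} \]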
Doesn't look too bad for Nvidia, especially when you consider the 3090 is still on Samsung 8nm, which is roughly equivalent to TSMC 10nm, compared to the 5nm of Apple's M1.
If Apple could just scale up their GPU and trounce a $430B-market-cap competitor's premier product at half the power and 60% of the die size, that actually looks pretty bad for Nvidia, doesn't it? Scaling is harder than that, and who knows whether they could do it so easily, but then again, who thought Apple would render both Intel and Nvidia irrelevant?
Regardless, Apple's threat to vendors like that is their complete vertical integration. I ran some of the new object capture (photogrammetry) code on my M1 Mac yesterday, and in no time at all the 11-trillion-ops-per-second Neural Engine blasted through it and generated a remarkable model. We've seen Apple respond to discovered performance needs by plugging in a matrix engine and a neural engine, scaling appropriately and dedicating cores and silicon to the greatest needs. They are in a unique position relative to someone like Nvidia, who effectively throws hardware over a fence.
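For reference, the object capture code here is RealityKit's PhotogrammetrySession API (macOS 12+). A rough sketch with placeholder paths, not the exact code:

    import Foundation
    import RealityKit

    // Rough sketch of Object Capture via PhotogrammetrySession (macOS 12+).
    // Paths are placeholders; call from an async context.
    func buildModel() async throws {
        let photos = URL(fileURLWithPath: "/path/to/photo-folder", isDirectory: true)
        let output = URL(fileURLWithPath: "/path/to/model.usdz")

        let session = try PhotogrammetrySession(input: photos)
        try session.process(requests: [.modelFile(url: output, detail: .medium)])

        // Results stream back as an async sequence while the SoC
        // (GPU / Neural Engine) does the heavy lifting.
        for try await message in session.outputs {
            switch message {
            case .requestProgress(_, let fraction):
                print("progress: \(fraction)")
            case .requestError(_, let error):
                print("failed: \(error)")
            case .processingComplete:
                print("model written to \(output.path)")
            default:
                break
            }
        }
    }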
That's beside the point though. You can't pick and choose which workloads you're going to run on Apple Silicon, because the ultimate goal is for it to compete with the rest of the industry on raw performance, which is simply not the case right now. My M1 Mac's GPU still loses in several benchmarks against my 7-year-old 1060. If Apple wants to lure people like me into their pro segment, they're going to need to scale aggressively: something ARM has notoriously had trouble with in the past.
Also, Apple desperately needs to support a real graphics API. Metal is a joke, and even translation layers like MoltenVK, while impressive, are still beholden to Apple's arbitrary limitations. If they don't end up supporting Vulkan on the M1, it's a moot point for me. You could have the most powerful GPU in the world, but I won't use it if it's bottlenecked by the shittiest modern graphics API.
"My M1 Mac's GPU still loses in several benchmarks against my 7-year-old 1060."
The GPU in the Apple Silicon M1 is the fastest integrated graphics available in the mainstream computing market [1]. That is the competition, not a standalone 120W GPU. Apple is purportedly now working on separate GPU designs with a much larger heat and power budget (which, contrary to some of the comments on here, clearly isn't going to be for laptops, beyond an external TB4 enclosure), and that might just change things a bit.
Scaling a GPU is easier than scaling a CPU, by design. Apple's GPU has nothing to do with ARM.
And to your original point: yes, Apple does largely choose which workloads run on Apple Silicon, and how. By controlling the APIs along with the silicon, Apple abstracts things to a degree that gives them enormous flexibility. The Accelerate and Core ML APIs are abstract vehicles that might use one or a thousand matrix engines, neural engines, or an array of GPUs. Apple has built a world where they have more hardware flexibility than anyone. And while close to no one is doing model training on Apple hardware right now, Apple has laid the foundation so that a competitive piece of hardware could change that overnight.
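As a minimal sketch of that abstraction, here's a matrix multiply through Accelerate's vDSP from Swift. The caller only states the math; whether it lands on the SIMD units, Apple's matrix hardware, or something else is decided inside the framework and isn't part of the public contract:

    import Accelerate

    // Minimal sketch: ask Accelerate for a 2x2 matrix multiply and let the
    // framework decide which silicon actually runs it.
    let m: vDSP_Length = 2   // rows of A and C
    let n: vDSP_Length = 2   // columns of B and C
    let p: vDSP_Length = 2   // columns of A / rows of B

    let a: [Float] = [1, 2,
                      3, 4]
    let b: [Float] = [5, 6,
                      7, 8]
    var c = [Float](repeating: 0, count: Int(m * n))

    vDSP_mmul(a, 1, b, 1, &c, 1, m, n, p)
    print(c)   // [19.0, 22.0, 43.0, 50.0]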
[1] The SoC graphics in the PS5 and Xbox Series X are more powerful, but the GPU portion of those chips alone uses an order of magnitude more power and more die area than the entire M1 SoC. In another comment you mentioned that Zen 2 integrated graphics come close. They aren't in the same ballpark, with literally 1/4 the performance or worse. In discussions like this, the tired "n years old / n process node" trope unfortunately gets used to excess, yet the fact remains that there are zero competitive integrated graphics on the market. None. Apple isn't even a GPU company, yet here we are.
This won't scale like that, and for deep learning, CUDA and cuDNN will probably still be 2-5x faster than the AMD/Metal stack, as has been shown before (in the case of AMD's shoddy deep learning drivers).
This. Nvidia's CUDA is a genuine technical marvel, and Metal really has nothing that can compete with it. I also get the feeling that unless Apple bites the bullet and supports Vulkan, they won't actually get any good GPU libs.
This is more for the Mac Pro line. Not so much the laptops.
We’ve only seen the Apple equivalent of the i3 with integrated graphics. It’s going to get interesting over the next several months as Apple unveils their mid-range and upper-tier performance parts.
I own an M1 MacBook Air. At its hottest, it runs at a fraction of the "normal" heat of my year-old Intel-based MacBook Pro.

To get the M1 warm, I need to be charging it while also playing an Intel-based game like the latest Subnautica, with the machine on my lap. Even then, it's not as hot as my MacBook Pro gets while unplugged and browsing the internet.
No. According to rumours, the high-end 64-128 core GPUs will be exclusive to the Mac Pro, and the MacBook Pro will have 16-32 core GPUs instead. I think it's fair to assume we won't see many changes to the cooling in their laptops.