For those who read the comments before the article, the initial batch of comments is burying the lede:
> The new tensorflow_macos fork of TensorFlow 2.4 leverages ML Compute to enable machine learning libraries to take full advantage of not only the CPU, but also the GPU in both M1- and Intel-powered Macs for dramatically faster training performance.
This isn't about being locked in to some Apple framework. TensorFlow for Mac is accelerated again. (As Gruber likes to say: finally!)
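For anyone wondering what "leverages ML Compute" looks like in practice, the fork's opt-in is roughly a one-liner before your normal Keras code. This is a sketch from memory of the fork's README, so treat the module path as an approximation and check the repo:

```python
# Assumes the apple/tensorflow_macos fork is installed.
import tensorflow as tf
from tensorflow.python.compiler.mlcompute import mlcompute

# Ask ML Compute to place work on the GPU ('cpu', 'gpu', or 'any').
mlcompute.set_mlc_device(device_name='gpu')

# From here on it is ordinary TensorFlow/Keras code.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```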
I mean, it's an impressive performance boost, but is anyone really training models on a MacBook or a Mac Mini? I'm not sure it's useful to compare this against itself; the real benchmark is how it performs compared to a dedicated Nvidia GPU, which is what most people are using in the real world.
Having an Nvidia GPU, or any other specific manufacturer's hardware, should not be a gatekeeping requirement for learning about, or even doing, meaningful ML work, just as owning a particular brand of computer should not be a requirement for doing meaningful general programming.
This runs on any Metal-enabled Mac; you don't need an M1 processor for that. In particular, the current Mac Pro and the current iMac Pro, which (can) have beefy discrete GPUs.
I've said for ages that Apple should be able to make strong inroads into the machine learning market. I think Nvidia should be very afraid of what Apple will hit them with over the course of this decade.
Nvidia's big market is in servers and big training projects that train on tens or hundreds of GB of data or more. In almost all of these environments the data are stored in a Hadoop store, cloud storage, or big SQL servers. No one is going to hook a Mac up to the network and add it to the data processing pipeline; the cloud is the natural place where this happens. Until Apple releases a dedicated GPU/neural-network processor on a PCIe card, Nvidia has nothing to fear. Nvidia is more concerned about AMD or some machine-learning-processor startup. Apple is going to keep Apple Silicon running only on Apple personal devices, because it is being used as a competitive advantage for those devices.
You are describing the current state of affairs. It's funny how people always think nothing will change. I'm sure you could have read similar statements about how Apple could never outperform Intel in their Macs from about five years ago.
It is simple: machine learning is part of the future and here to stay, and Apple will need to be in it as well. What better way to build expertise than to build your own hardware for it, especially given how far they have already come in that domain? It is laughable to think that machine learning dominance is not one of Apple's goals.
>current iMac Pro, which (can) have beefy discrete GPUs.
Wouldn't call anything they're shipping there beefy; their top configuration seems about on par performance-wise (though with more memory) with a GTX 1070 Ti, which is a mid-range 2017 card.
I train models for niche use cases that I haven't found any pre-trained models for. It's not great, but with PlaidML/Keras I make use of my current MacBook Pro's GPU (AMD 5500M). I get a speedup of about 2x over running TensorFlow on all CPU cores with multiprocessing.
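In case it helps anyone, the setup is essentially just pointing Keras at the PlaidML backend before importing it. A rough sketch; you pick the GPU via `plaidml-setup` first, and details may vary with your plaidml/keras versions:

```python
# Select the PlaidML backend before importing Keras.
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

import keras  # standalone Keras, not tf.keras
from keras import layers

# Any ordinary Keras model now runs on the GPU chosen in plaidml-setup.
model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```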
Another question that came to mind: what about benchmarks against other ARM-based mobile CPUs? I think Apple is going to have a powerful edge over Android devices in things like on-device inference and federated learning...
You might not train a modern model from scratch, but allowing Mac users to do fine-tuning or transfer learning could still make this incredibly useful and continue to make practical machine learning more accessible.
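Fine-tuning on a laptop GPU is usually just freezing a pre-trained backbone and training a small head on top. A generic tf.keras sketch; the class count and the data pipeline here are placeholders, not anything from the announcement:

```python
import tensorflow as tf

# Load a pre-trained backbone and freeze it.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

# Train only a small classification head for your own classes.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 classes, hypothetical
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)  # train_ds: your own tf.data pipeline
```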
Does anyone have an educated guess as to how this compares to current consumer Nvidia GPUs? I was just about to order a maxed-out Nvidia system (ignoring memory for now).
Bonus question to anyone from Apple who might read this: Will Apple contribute to the PyTorch repo re M1 support?
If Apple cares about the scientific computing community, they can do a great job here. If they can achieve this with an ARM SoC that evolved from a mobile platform, they could very well make a beefier version dedicated to ML or graphics-intensive workloads. They could even make enterprise versions of these to power cloud GPUs that easily outperform AWS offerings.
It's a great sign that someone at Apple cares about this; hopefully they eventually support PyTorch as well.
There are a lot of use cases (smaller networks, non-deep-learning numerical algorithms) where having acceleration on a laptop is really nice; going to a remote machine would add friction.
The old Nvidia MacBook Pros used to be great for this. Having something like that, but one that somehow miraculously runs cool and stays light, would be pretty great.