Leveraging ML Compute for Accelerated Training on Mac (machinelearning.apple.com)
87 points by jonbaer on Nov 19, 2020 | 24 comments


For those who read the comments before the article, the initial batch of comments is burying the lede:

> The new tensorflow_macos fork of TensorFlow 2.4 leverages ML Compute to enable machine learning libraries to take full advantage of not only the CPU, but also the GPU in both M1- and Intel-powered Macs for dramatically faster training performance.

This isn't about being locked in to some Apple framework. TensorFlow for Mac is accelerated again. (As Gruber likes to say: finally!)
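For anyone curious what "leveraging ML Compute" looks like in practice, here's a rough sketch of opting in with the tensorflow_macos fork. The mlcompute module path and set_mlc_device() call are what I recall from the fork's README; treat them as assumptions if you're on a different build, and the toy MNIST model is just a placeholder.

    # Minimal sketch: route training through ML Compute in the tensorflow_macos fork.
    import tensorflow as tf
    from tensorflow.python.compiler.mlcompute import mlcompute  # fork-specific module

    # Pick 'cpu', 'gpu', or 'any' (let ML Compute decide).
    mlcompute.set_mlc_device(device_name='gpu')

    # Train a small model as usual; device placement is handled for you.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
    model.fit(x_train, y_train, epochs=2, batch_size=64)

The point is that existing Keras/TF code stays the same; you only select the device up front.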


I mean, it's an impressive performance boost, but is anyone really training models on a MacBook or a Mac mini? I'm not sure it's useful to compare this against itself; the real benchmark is how it performs against a dedicated Nvidia GPU, which is what most people are using in the real world.


Having an Nvidia GPU, or any other specific manufacturer's hardware, should not be a gatekeeping requirement for learning about and even doing meaningful ML work, just as owning a particular computer brand should not be a requirement for doing meaningful general programming.


Toy models matter for research, education, and democratic access to ML and the computational thinking it enables.


> is anyone really training models on a Macbook or a Mac Mini?

In an academic environment, absolutely. Scaling is the second problem to solve, but iterating on model architecture is the first.


This runs on any Metal-enabled Mac; you don't need an M1 processor for it. In particular, the current Mac Pro and current iMac Pro (can) have beefy discrete GPUs.

I've said for ages that Apple should be able to make strong inroads into the machine learning market. I think Nvidia should be very afraid of what Apple will hit them with over the course of this decade.


Nvidia's big market is in servers and big training projects that train on tens or hundreds of gigabytes of data or more. In almost all of these environments, the data are stored in a Hadoop store, cloud storage, or big SQL servers; no one is going to hook up a Mac to the network and add it to the data processing pipeline, and the cloud is the natural place where this happens. Until Apple releases a dedicated GPU/neural network processor on a PCIe card, Nvidia has nothing to fear. Nvidia is more concerned about AMD or some machine learning processor startup. Apple is going to keep Apple Silicon running only on Apple personal devices, because it is being used as a competitive advantage for Apple devices.


You are describing the current state of affairs. It's funny how people always think nothing will change. I am sure you can find similar statements from about 5 years ago about how Apple could never outperform Intel in its Macs.

It is simple: machine learning is part of the future and here to stay. Apple will need to do that as well. What better way is there to build expertise in it than to build your own hardware for it? Especially given how far they have already come in that domain? It is laughable to think that machine learning domination is not one of Apple's goals.


>current iMac Pro, which (can) have beefy discrete GPUs.

Wouldn't call anything they're shipping there beefy; their top configuration seems about on par performance-wise (though with more memory) with a GTX 1070 Ti, which is a mid-range 2017 card.


Yeah, beefy might be overstating it, but I am pretty happy with the top configuration.


I train models for niche use cases for which I haven't found any pre-trained models. It's not great, but with PlaidML/Keras I make use of my current MacBook Pro's GPU (AMD 5500M). I get a speedup of about 2x over using multiprocessing across all cores with TensorFlow.
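For context, the PlaidML setup is roughly the sketch below. It assumes you've installed plaidml-keras and standalone Keras and run `plaidml-setup` once to pick the GPU; the tiny model and random data are just placeholders, not my actual workload.

    # Sketch: use PlaidML as the Keras backend so training runs on the AMD GPU.
    import os
    os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"  # must be set before importing keras

    import numpy as np
    import keras  # standalone Keras, not tf.keras
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(64, activation="relu", input_shape=(32,)),
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Dummy data just to exercise the backend; training runs on whichever
    # device was selected during `plaidml-setup` (e.g. the AMD 5500M).
    x = np.random.rand(256, 32).astype("float32")
    y = np.random.randint(0, 2, size=(256, 1))
    model.fit(x, y, epochs=1, batch_size=32)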


Another question that came to my mind: what about benchmarks against other ARM-based mobile CPUs? Because I think Apple is then going to have a powerful edge over Android devices in things like on-device inference and federated learning...


You might not train a modern model from scratch, but allowing Mac users to do fine-tuning or transfer learning could still make this incredibly useful and continue to make practical machine learning more accessible.
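That kind of transfer learning fits comfortably on a laptop GPU. A minimal sketch, assuming a frozen ImageNet-pretrained backbone and a small trainable head (the MobileNetV2 choice, shapes, class count, and `train_ds` dataset are illustrative, not from the article):

    # Sketch: freeze a pretrained backbone and train only a small classification head.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pretrained backbone

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 target classes
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # model.fit(train_ds, epochs=3)  # hypothetical dataset; only the head's weights update

Since only the small head is trained, the compute and memory needs are a fraction of training from scratch, which is exactly where a laptop-class GPU is good enough.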


I would assume this boost unlocks faster inference as well. Combined with WASM, quicker inference could open up new ML use cases.


I would love to know more about PyTorch, and also about external GPUs (Radeon, on Intel Macs).

I seem to remember PyTorch being mentioned somewhere, cannot find it now, but remain hopeful.

eGPUs are out of the picture for M1, but is there a chance one would soon work for ML on an Intel Mac? I'm guessing not, but it would be a nice bonus.


Are they? Someone hooked up a Radeon eGPU to an M1 system and the device was identified. What’s missing is Radeon drivers compiled for M1.

Or do we expect that to be such a niche market that nobody will touch it before M2 with official eGPU support is released?


Does anyone have an educated guess how this compares to current consumer Nvidia GPUs? I was just about to order a maxed-out Nvidia system (ignoring memory for now).

Bonus question to anyone from Apple who might read this: Will Apple contribute to the PyTorch repo re M1 support?


I saw some initial reports on twitter saying it was about the same speed as a 1080 Ti https://twitter.com/spurpura/status/1329168059647488000


Holy crap, that's more than enough for trying out deep learning (unless you're doing NLP, where memory becomes a bigger problem).


The graphics performance is similar to an NVIDIA 1050. Not sure about the ML cores; those are specialised, so maybe even better than that.


If Apple cares about the scientific computing community, it can do a great job here. If it can achieve this with an ARM SoC that evolved from a mobile platform, it could very well make a beefier version dedicated to ML or graphics-intensive workloads, and it could easily make enterprise versions of these to power cloud GPUs that outperform AWS offerings.


it’s a great sign that someone at Apple cares about this; hopefully they eventually support PyTorch as well.

there are a lot of use cases (smaller networks, non-deep-learning numerical algorithms) where having acceleration on a laptop is really nice — it would add friction to go to a remote machine.

The old Nvidia MacBook Pro used to be great for this. Having something like that, but which somehow miraculously runs cool and is light, would be pretty great.


Great news; building TensorFlow with GPU support for Mac was a nightmare the last time I did it.

Does anyone know if Anaconda works well on M1?

Just need Homebrew & Docker support for M1 now.


Nice one, Apple, comparing against yourself. Does anyone know about any effort in the same direction for PyTorch?



