I don't think it's true that MobileNetV2 isn't designed to train on GPUs - according to this https://azure.microsoft.com/en-us/blog/gpus-vs-cpus-for-depl... MobileNetV2 gets bigger gains from GPUs over several CPUs than ResNet does. You could argue the batch size doesn't fully utilize the V100, but these comparisons are tricky and this looks like fairly normal training to me.
It's pretty surprising to me that an M1 performs anywhere near a V100 on model training and I guess the most striking thing is the energy efficiency of the M1.
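For reference, the kind of "fairly normal training" I have in mind looks roughly like this (a minimal tf.keras sketch; CIFAR-10, the 96x96 resize, and batch size 128 are placeholders, not the benchmark's actual settings):

    import tensorflow as tf

    # Minimal MobileNetV2 training sketch. The dataset, image size and
    # batch size below are illustrative stand-ins, not the benchmark config.
    (x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
    x_train = tf.image.resize(x_train[:5000], (96, 96)) / 255.0
    y_train = y_train[:5000]

    model = tf.keras.applications.MobileNetV2(
        input_shape=(96, 96, 3), weights=None, classes=10)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, batch_size=128, epochs=1)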
Depends on the model size, but if the model is small enough that I actually do training on a PCIe board, I do. I partition an A100 into 8 and train 8 models at a time, or just use MPS on a V100 board. The bigger A100 boards can fit multiple copies of the models that fit on a single V100 (rough launch sketch below).
Also, I tend to do this initially, when I'm exploring the hyperparameter space, for which I use more but smaller models.
I find that using big models initially is just a waste of time. You want to try many things as quickly as possible.
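The launch side can look something like this (a rough sketch, assuming MIG is already enabled and partitioned with nvidia-smi; train.py and the sweep config names are hypothetical placeholders):

    import os
    import subprocess

    # Rough sketch: one training process per MIG slice of an A100.
    # Assumes MIG mode is already on and instances were created, e.g.
    #   sudo nvidia-smi -i 0 -mig 1
    #   sudo nvidia-smi mig -i 0 -cgi 1g.5gb -C      # repeated per slice
    # For the MPS route on a V100 you'd instead start the daemon with
    # nvidia-cuda-mps-control -d and launch the processes normally.

    # Grab the MIG device UUIDs that nvidia-smi -L reports.
    out = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    mig_uuids = [line.split("UUID: ")[1].rstrip(")")
                 for line in out.stdout.splitlines() if "MIG-" in line]

    procs = []
    for i, uuid in enumerate(mig_uuids):
        # Pin each process to one MIG instance via CUDA_VISIBLE_DEVICES.
        env = {**os.environ, "CUDA_VISIBLE_DEVICES": uuid}
        procs.append(subprocess.Popen(
            ["python", "train.py", "--config", f"sweep_{i}.yaml"], env=env))

    for p in procs:
        p.wait()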
I found that training multiple models on the same GPU hits other bottlenecks (mainly memory capacity/bandwidth) fast. I tend to train one model per GPU and just scale the number of machines. Also, if nothing else, we tend to push model size to fill the GPU memory anyway.
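For what it's worth, here's roughly how I'd sanity-check the capacity side before co-locating anything (a PyTorch sketch; the model and batch are stand-ins, and it says nothing about the bandwidth contention, which is usually what bites first):

    import torch
    import torchvision

    # Measure peak GPU memory for one training step to estimate how many
    # copies of a model would fit on one card. Model and batch are stand-ins.
    device = torch.device("cuda")
    model = torchvision.models.mobilenet_v2(num_classes=10).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(128, 3, 96, 96, device=device)
    y = torch.randint(0, 10, (128,), device=device)

    torch.cuda.reset_peak_memory_stats()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()

    peak = torch.cuda.max_memory_allocated() / 2**30
    total = torch.cuda.get_device_properties(0).total_memory / 2**30
    print(f"peak {peak:.2f} GiB of {total:.1f} GiB, "
          f"so roughly {int(total // peak)} copies before capacity runs out")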
Memory became less of an issue for me with the V100, and isn't really an issue with the A100, at least when I'm quickly iterating on newer models, while the sizes are still relatively small.
Could you say a little more? I think I understand the "scaler" (it's how I learned the scales and how I practice), but I'm curious what the pentanizer suggests doing. Picking a root and an interval and finding it all over the fretboard?
One of the coolest parts of teaching these classes is how awesome the people who show up are. Engineers who want to learn new things mid-career are exactly the kind of people I want to work with and hang out with. I think there's a real opportunity for more classes like this.
I really appreciate the author's critical analysis of this correlation presented as "fact" by Radiolab and I love how Hacker News and other blogs take these types of scientific findings and dig in for the truth. I think the PNAS paper refutes the original conclusion pretty thoroughly - I wish the Nautilus author would just explain that.
I don't think we should dismiss effects just because they seem really large (as the Nautilus author claims) but I do think that it's incredibly irresponsible of Sapolsky and Radiolab to be uncritically citing a study that looks like it was debunked in 2011.
I also find it strange that the author cites the SJDM paper, which is much, much less convincing, as if it refutes the original experiment. It looks to me like that paper just shows that, by simulating a non-random order of parole requests, you can create data that looks like the original experiment's.
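To be concrete about what I mean, here's a toy version of that kind of simulation (my loose reading, not the paper's actual model: favorable rulings take longer, and a case that won't finish before a scheduled break gets pushed past it):

    import random

    # Toy sketch: favorable rulings take longer, and a case that doesn't fit
    # before the break is heard after it instead. Numbers are made up.
    random.seed(0)
    FAVORABLE_MIN, UNFAVORABLE_MIN = 7.0, 4.0
    SESSION_MIN = 60.0
    P_FAVORABLE = 0.35

    def run_session():
        t, decided = 0.0, []
        while True:
            favorable = random.random() < P_FAVORABLE
            duration = FAVORABLE_MIN if favorable else UNFAVORABLE_MIN
            if t + duration > SESSION_MIN:   # doesn't fit: pushed past the break
                return decided
            decided.append((t, favorable))
            t += duration

    rest, pre_break = [], []
    for _ in range(20000):
        for t, fav in run_session():
            if SESSION_MIN - t < FAVORABLE_MIN:
                pre_break.append(fav)
            else:
                rest.append(fav)

    print(f"favorable rate, earlier in session: {sum(rest) / len(rest):.2f}")
    print(f"favorable rate, right before break: {sum(pre_break) / len(pre_break):.2f}")

Even though rulings are drawn independently at random here, the favorable rate right before the break drops to zero purely from scheduling, which is the shape of the original result.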
I love that Hacker News posts these things and people go through and analyze the papers. No one outside of the specialized field could possibly have time to analyze all of these papers but they clearly have implications that matter for everyone. I wish that popular science shows would do a more thorough analysis of these results on their own.