Hacker News

I’ve found it to be pretty terrible compared to CUDA, especially with Hugging Face Transformers. There’s no technical reason why it has to be terrible there, though. Apple should fix that.
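Part of the friction is that a lot of example code hard-codes `"cuda"`. A minimal sketch of device selection with an MPS and CPU fallback (assuming PyTorch 1.12+, where the `torch.backends.mps` API exists; the guard also lets the function degrade gracefully if PyTorch isn't installed at all):

```python
def pick_device() -> str:
    """Return the best available PyTorch device string: cuda, mps, or cpu."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
        # MPS is Apple's Metal Performance Shaders backend (Apple silicon GPUs).
        if torch.backends.mps.is_available():
            return "mps"
    except ImportError:
        # No PyTorch at all; fall through to CPU.
        pass
    return "cpu"
```

With Transformers you'd then do something like `model.to(pick_device())`; the same model code runs on all three backends, which is why the performance gap reads as a tuning problem rather than an API problem.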



Yeah. It’s good with YOLO and DINO though. My M2 Max can compute DINO embeddings faster than a T4 (which is the GPU in AWS’s g4dn instance type).


MLX will probably be even faster than that, if the model is already ported. Faster startup time too. That’s my main pet peeve, though: there’s no technical reason why PyTorch couldn’t be just as good. It’s just underfunding and neglect.


T4s are like six years old.





