
I bet a lot of people would be happy if their GPU could train large models, even if it took 5x as long.
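To make the trade concrete (my sketch, not something from this thread; the model size and layer count are made up): PyTorch already exposes one knob like this, spilling activations saved for backward into host RAM, so a bigger model fits in VRAM at the cost of PCIe traffic on every step.

    # Hedged sketch: trade step time for VRAM by parking saved
    # activations in pinned host RAM instead of GPU memory.
    import torch
    import torch.nn as nn

    model = nn.Sequential(*[nn.Linear(4096, 4096) for _ in range(24)]).cuda()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    x = torch.randn(8, 4096, device="cuda")

    # save_on_cpu() moves tensors saved for backward to (pinned) CPU
    # memory and copies them back during backward, paying transfer
    # time each step in exchange for VRAM headroom.
    with torch.autograd.graph.save_on_cpu(pin_memory=True):
        loss = model(x).pow(2).mean()
    loss.backward()
    opt.step()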


That amounts to a multi-level DRAM config; see NVIDIA's Grace Superchip for an example.

Or just a plainly slower GPU with a large memory pool: Apple's GPU line, though at much higher price tags than desktop GPUs at a given performance level.
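You can also fake a second memory tier in software without special hardware (again my sketch, nothing from the posts above; layer sizes are illustrative and this is forward-only, a real training loop would also shuttle gradients and optimizer state): keep the weights in pinned host RAM and stage one layer at a time into VRAM.

    import torch
    import torch.nn as nn

    layers = [nn.Linear(8192, 8192) for _ in range(16)]  # tier 2: host RAM
    for l in layers:
        for p in l.parameters():
            p.data = p.data.pin_memory()  # pinned RAM enables async H2D copies

    x = torch.randn(4, 8192, device="cuda")
    with torch.no_grad():
        for layer in layers:
            layer.to("cuda", non_blocking=True)  # stage host tier -> VRAM
            x = layer(x)                         # compute on the GPU
            layer.to("cpu")                      # evict to free VRAM for the next layer
    print(x.shape)  # torch.Size([4, 8192])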



