
You are missing TPU and spot/preemptible pricing, both of which need to be considered when talking about training cost. The big one to me is the ability to train consistently on V100s with spot pricing, which was not possible a couple of years ago (there wasn't enough spare capacity). Also, the improvement in cloud bandwidth for DL-type instances has helped distributed training a lot.
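The consistency point hinges on preemption tolerance: a spot/preemptible instance can be reclaimed at any time, so the job has to checkpoint and resume rather than restart from scratch. A minimal sketch of that pattern, assuming a PyTorch setup (the model, checkpoint path, and schedule here are hypothetical placeholders):

    import os
    import torch
    import torch.nn as nn

    CKPT_PATH = "checkpoint.pt"  # hypothetical path; real spot jobs write to durable storage (S3/GCS)

    model = nn.Linear(10, 1)  # stand-in for a real network
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    start_epoch = 0

    # Resume if a previous (possibly preempted) run left a checkpoint behind.
    if os.path.exists(CKPT_PATH):
        state = torch.load(CKPT_PATH)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_epoch = state["epoch"] + 1

    for epoch in range(start_epoch, 100):
        # ... one epoch of training (dummy step shown) ...
        x = torch.randn(32, 10)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Checkpoint every epoch so a preemption costs at most one epoch of work.
        torch.save(
            {"model": model.state_dict(),
             "optimizer": optimizer.state_dict(),
             "epoch": epoch},
            CKPT_PATH,
        )

In practice the checkpoint has to land on storage that survives the instance, since the local disk disappears when the VM is reclaimed.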

