
Do local GPUs make sense? For the same price, can't you get a full year's worth of cloud GPU time?


Cloud GPU providers are running low on capacity at the moment as people frantically snap up instances to hop on the AI bandwagon, so having guaranteed access is one motivation for local GPUs. But for me the main reason to go local is psychological: I've mostly used cloud compute up until now, but whenever I'm paying an hourly cost (even a small one) there's pressure to 'make it worthwhile', and I feel guilty when the GPU sits idle. That disincentivizes playing and experimentation, whereas running things locally means almost no friction for quickly trying something out.


Looking at the pricing, if you only spin instances up when you need them, it can take a long time for a local card to break even. If you're running jobs around the clock, though, it pays for itself within a few months, depending on the GPU (rough sketch below).

I would imagine that someone really serious about training (or any other CUDA workload) uses both.
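A back-of-the-envelope version of that break-even math, as a minimal Python sketch. The prices are placeholder assumptions, not quotes: roughly $1,800 for a retail RTX 4090 and roughly $0.70/hr for a comparable on-demand cloud GPU; plug in whatever your provider actually charges.

    # Rough break-even: buying a GPU outright vs. renting cloud time.
    # Prices below are illustrative assumptions, not real quotes.

    def break_even_hours(card_cost: float, cloud_rate_per_hour: float) -> float:
        """Hours of cloud rental that would cost as much as buying the card."""
        return card_cost / cloud_rate_per_hour

    hours = break_even_hours(card_cost=1800.0, cloud_rate_per_hour=0.70)
    print(f"Break-even: ~{hours:,.0f} GPU-hours")            # ~2,571 hours
    print(f"At 10 hrs/week: ~{hours / 10 / 52:.1f} years")   # ~4.9 years
    print(f"Running 24/7: ~{hours / 24 / 365:.2f} years")    # ~0.29 years (~3.5 months)

Under those assumed numbers, occasional use takes years to break even, while a saturated card pays for itself in a few months, which is the utilization split the parent is pointing at.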


Having looked at the pricing of retail cards vs. cloud, I came to the conclusion that I could probably buy enough cloud compute to complete a PhD before I'd 'paid off' the cost of a 4090 build...
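Sanity-checking that claim with placeholder numbers (build cost, hourly rate, and utilization are all assumptions, not the commenter's actual figures):

    # Assumed: ~$2,500 full 4090 build, ~$0.50/hr community-cloud 4090 rental,
    # a 4-year PhD, and ~8 GPU-hours of real utilization per week.
    build_cost = 2500.0
    cloud_rate = 0.50
    weekly_hours = 8
    phd_years = 4

    cloud_total = cloud_rate * weekly_hours * 52 * phd_years
    print(f"Cloud spend over the PhD: ${cloud_total:,.0f}")  # $832 vs. $2,500 build

At that utilization the claim checks out with room to spare; it only flips if the card is kept busy a large fraction of the time.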


Buying a high-end gaming GPU also lets you do, well, high-end gaming, 3D and video renders, etc.

If you only care about ML stuff, sure, the calculation is different.



