Hacker News
bitexploder
4 days ago
on:
Google will let companies run Gemini models in the...
My limited understanding is that CUDA wins on smaller batches and jobs, while TPUs win on larger ones: CUDA is just easier to use and better at typical small workloads, but at some point, for bigger training and inference loads, TPUs start making sense.
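One intuition behind the batch-size point: per-item throughput of a dense operation generally improves as the batch grows, because fixed overheads are amortized and the hardware's wide units stay busy. This is a hypothetical micro-benchmark sketch (run on CPU with NumPy purely to illustrate the batching effect, not actual TPU vs GPU behavior; the function name and dimensions are made up):

```python
import time
import numpy as np

def items_per_second(batch, dim=256, reps=5):
    """Measure per-item matmul throughput at a given batch size.

    Illustrative only: shows how fixed per-call overhead is amortized
    as batch size grows, one reason wide accelerators favor big batches.
    """
    x = np.random.rand(batch, dim).astype(np.float32)
    w = np.random.rand(dim, dim).astype(np.float32)
    x @ w  # warm-up so one-time setup cost isn't measured
    start = time.perf_counter()
    for _ in range(reps):
        x @ w
    elapsed = time.perf_counter() - start
    return batch * reps / elapsed

for batch in (1, 8, 64, 512):
    print(f"batch={batch:4d}  items/s={items_per_second(batch):,.0f}")
```

On most machines the items/s figure climbs steeply with batch size before leveling off; on a TPU-class accelerator that curve is even more pronounced, which is consistent with the claim above.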