Exactly! This is called "model parallelism" - the layers of the graph are spread across multiple compute devices. Large clusters of V100s, or the forthcoming trn1 instances (disclosure: I work on that team), need _stupid_ amounts of inter-device bandwidth, particularly for training.
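
To make that concrete, here's a minimal sketch of layer-wise model parallelism in PyTorch. The device names, layer sizes, and the two-way split are made up for illustration; real setups shard across many more devices and overlap the transfers.

    import torch
    import torch.nn as nn

    # hypothetical pair of accelerators
    dev0 = torch.device("cuda:0")
    dev1 = torch.device("cuda:1")

    class TwoDeviceModel(nn.Module):
        """First chunk of layers lives on dev0, second chunk on dev1."""
        def __init__(self):
            super().__init__()
            self.part1 = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU()).to(dev0)
            self.part2 = nn.Linear(4096, 10).to(dev1)

        def forward(self, x):
            x = self.part1(x.to(dev0))
            # this device-to-device copy (and its mirror in the backward
            # pass) is exactly where the inter-device bandwidth gets eaten
            return self.part2(x.to(dev1))

    model = TwoDeviceModel()
    out = model(torch.randn(8, 1024))

Every forward step ships activations across the interconnect and every backward step ships gradients back, which is why training stresses the fabric far more than inference does.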

