
It makes sense for desktops but not for devices with batteries. I think Apple should introduce a new $5-10k device with 400GB of VRAM that every Mac on the network can use for ML.
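A minimal sketch of what "every Mac on the network can use it" might look like from the laptop's side: a client posting a prompt to a hypothetical shared inference box on the LAN. The host name, port, endpoint path, request schema, and model name here are all assumptions (loosely modeled on the OpenAI-style chat API that many local inference servers expose), not anything Apple ships today.

  import Foundation

  // Hypothetical shared inference box on the LAN; host, port, and path are assumptions.
  let endpoint = URL(string: "http://ml-box.local:8080/v1/chat/completions")!

  // OpenAI-style chat request body; the schema and model name are assumptions
  // about what such a shared server would expose, not a documented Apple API.
  let body: [String: Any] = [
      "model": "local-400gb-model",
      "messages": [["role": "user", "content": "Summarize today's meeting notes."]]
  ]

  var request = URLRequest(url: endpoint)
  request.httpMethod = "POST"
  request.setValue("application/json", forHTTPHeaderField: "Content-Type")
  request.httpBody = try! JSONSerialization.data(withJSONObject: body)

  // Fire the request and wait for the shared box to answer, so the heavy
  // inference runs on the always-plugged-in machine, not on this laptop's battery.
  let done = DispatchSemaphore(value: 0)
  URLSession.shared.dataTask(with: request) { data, _, error in
      defer { done.signal() }
      if let data = data, let text = String(data: data, encoding: .utf8) {
          print(text)                     // raw JSON response from the server
      } else if let error = error {
          print("Request failed: \(error)")
      }
  }.resume()
  done.wait()

The point of the sketch is only that the laptop's share of the work shrinks to a cheap HTTP round trip; all of the VRAM-heavy compute stays on the dedicated box.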

If you're on battery, you don't want to do LLM inference on a laptop. Hell, you don't really want to do transcription inference for that long either - but it would be nice not to have to send it to a data center.





