It makes sense for desktops but not for devices with batteries. I think Apple should introduce a new device in the $5-10k range with 400GB of VRAM that all Macs on the network can use for ML.
If you're on battery, you don't want to do LLM inference on a laptop. Hell, you don't really want to run transcription inference for that long either - but it would be nice not to have to send it to a data center.