We can already see today that the MI300X can run inference for some open-source LLMs: https://www.youtube.com/watch?v=rYVPDQfRcL0
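As a rough sanity check on that claim, here is a back-of-envelope sketch (not a benchmark): the MI300X ships with 192 GB of on-package HBM3, so even a 70B-parameter model fits on a single card at 16-bit precision, with more headroom at quantized precisions. The model sizes and byte-per-parameter figures below are illustrative, and KV cache and activation memory are ignored.

```python
# Back-of-envelope: do a model's weights fit in the MI300X's 192 GB of HBM3?
# (Ignores KV cache and activation memory, which add real overhead in practice.)

HBM_GB = 192  # MI300X on-package HBM3 capacity

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for params in (7, 13, 70):
    for precision, nbytes in (("fp16", 2), ("int8", 1), ("int4", 0.5)):
        gb = weight_memory_gb(params, nbytes)
        verdict = "fits" if gb < HBM_GB else "does not fit"
        print(f"{params}B @ {precision}: {gb:>5.1f} GB -> {verdict} in {HBM_GB} GB")
```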
There are almost certainly algorithmic optimizations still available. LLM-scale computing inside consumer robots should be achievable by the end of the decade. In fact, electric cars, with their large battery packs and built-in cooling systems, are probably the best "housing" for this sort of hardware.