Hacker News

I was incorrectly calculating based on 1 weight == 1 transistor, which is wrong. The figure you provided is more accurate.
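To see roughly how far off the 1 weight == 1 transistor assumption is, here is a minimal sketch. The numbers are my own assumptions, not from the thread: 6-transistor SRAM cells, 8-bit quantized weights, and a 7B-parameter model as the example size.

```python
# Back-of-envelope: transistors needed to hold model weights in on-chip SRAM.
# Assumes 6T SRAM cells; DRAM or flash would change the constant considerably.
SRAM_TRANSISTORS_PER_BIT = 6

def transistors_for_weights(num_params: int, bits_per_weight: int = 8) -> int:
    """Transistors to store all weights in 6T SRAM at the given precision."""
    return num_params * bits_per_weight * SRAM_TRANSISTORS_PER_BIT

params = 7_000_000_000  # illustrative 7B-parameter model
naive = params          # the "1 weight == 1 transistor" estimate
actual = transistors_for_weights(params, bits_per_weight=8)
print(f"naive estimate : {naive:.2e} transistors")
print(f"6T SRAM, 8-bit : {actual:.2e} transistors ({actual // naive}x more)")
```

Under these assumptions the naive estimate is off by a factor of ~48, and that is before counting any compute logic.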

We can already see this today: the MI300X can run inference for some open-source LLMs: https://www.youtube.com/watch?v=rYVPDQfRcL0

There are almost certainly algorithmic optimizations still available. LLM-scale computing inside consumer robots should be achievable by the end of the decade. In fact, electric cars are probably the best "housing" for this sort of hardware.
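One concrete class of such optimizations is weight quantization, which directly shrinks the memory an embedded system must carry. A quick sketch, with an illustrative 70B-parameter model (my assumption, not a specific model from the thread):

```python
# How weight precision affects the raw memory footprint of an LLM's weights.
# Parameter count and precisions are illustrative, not benchmarks.
def weight_memory_gb(num_params: int, bits_per_weight: int) -> float:
    """Gigabytes needed to store the weights alone at a given precision."""
    return num_params * bits_per_weight / 8 / 1e9

params = 70_000_000_000  # illustrative 70B-parameter model
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit weights: {weight_memory_gb(params, bits):.0f} GB")
```

Going from 16-bit to 4-bit weights cuts the footprint 4x (140 GB down to 35 GB in this example), which is the kind of headroom that makes in-vehicle or in-robot inference plausible.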


