Hacker News

I saw somewhere that 95% of all ML inference tasks are still done on CPU.


It's true that inference is still very often done on CPU, or even on microcontrollers. In our view, this is in large part because many applications lack good options for inference accelerator hardware. This is what we aim to change!


So, in your opinion, why would those CPU users want to migrate to an FPGA and your software rather than to Nvidia T4 or Tegra and CUDA?


It depends on the application. For some use cases, moving to a GPU makes total sense. However, if you have power, form-factor, or performance constraints, or simply want to be in control of your own hardware, an FPGA with Tensil may be a better option.
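To make the tradeoff concrete, here is a back-of-envelope sketch comparing deployment candidates on inferences per second per watt. Every figure below is an illustrative assumption for the sake of the example, not a measured number for any real CPU, GPU, or FPGA:

```python
# Back-of-envelope efficiency comparison. All throughput and power
# figures are illustrative assumptions, not benchmarks of real devices.
def perf_per_watt(inferences_per_sec: float, watts: float) -> float:
    """Inferences per second delivered per watt of board power."""
    return inferences_per_sec / watts

# Hypothetical deployment candidates (assumed throughput and power draw).
candidates = {
    "cpu":  perf_per_watt(inferences_per_sec=50,  watts=15),
    "gpu":  perf_per_watt(inferences_per_sec=900, watts=70),
    "fpga": perf_per_watt(inferences_per_sec=400, watts=10),
}

best = max(candidates, key=candidates.get)
print(best)  # under these assumed figures, the FPGA wins on efficiency
```

The point is not the specific numbers but the shape of the decision: a GPU may win on raw throughput while a power-constrained deployment optimizes for a different metric entirely.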



