Xilinx is certainly doing some very interesting stuff with Everest.
But to be honest I don't see where all the hype (and funding) for inference accelerators is coming from. I think many people overestimate the difficulty of making an accelerator that does high-performance 8-bit math - Google has done it (TPUv1), Tesla has done it (unnamed custom silicon), Nvidia has done it (NVDLA), Xilinx has done it, ...
I can't see inference accelerators becoming anything but commodity in a few years.
I think most of the excitement is probably that it isn't something strictly Intel / AMD can do.
Which is pretty disruptive in the semi space.
The last few times we had problem spaces that weren't economical to fold into the CPU (graphics, machine learning / GPGPU), a whole lot of value was captured by different players.