There may be a market sweet spot between dedicated high-performance machine learning hardware, ideal for experts willing to buy/lease dedicated gear, and regular developers who want to use simpler machine learning toolkits to solve easier problems. Toolkits like Julia, NumPy, R, etc. would take advantage of the new instructions and basically give your average developer a speed boost simply for having an Intel PC.
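As a minimal sketch of how that speed boost would arrive for free: NumPy already dispatches vectorized operations to a BLAS backend, which in turn uses whatever SIMD instructions the CPU exposes, so the same source code gets faster when the libraries are rebuilt for new hardware (the sizes and values below are arbitrary, just for illustration).

    import numpy as np

    # Vectorized matrix multiply: NumPy hands this off to its BLAS
    # backend, which uses whatever vector instructions (SSE/AVX/...)
    # the CPU provides. The Python code never has to change.
    a = np.random.rand(2048, 2048).astype(np.float32)
    b = np.random.rand(2048, 2048).astype(np.float32)

    c = a @ b  # same code, faster on chips with wider vector units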
This reminds me of when you had to buy crypto accelerator cards for your webservers (does anyone remember Soekris?)...
The market stalled after Intel started adding support for crypto operations on-die.
This looks like Intel protecting its server chip business by shooting at nVidia before they start selling the Titan-X alongside some other server chip (which chip, I don't know; but I bet there's a business plan on a spreadsheet somewhere).
I guess training neural nets will be a GPU / custom-chip task for some time; but as soon as you have developed the classifier / predictor, you have to run it on whatever hardware you have, which means you might need far less parallel computing power, and it doesn't make (financial) sense to buy GPUs for that.
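A rough sketch of that deployment story (the layer sizes and weights below are hypothetical stand-ins for trained values): once training is done, inference is just a handful of matrix-vector products, which a commodity CPU handles fine.

    import numpy as np

    # Hypothetical "trained" weights for a tiny two-layer classifier;
    # in practice these would be loaded from the training run.
    W1 = np.random.rand(64, 128).astype(np.float32)
    b1 = np.random.rand(64).astype(np.float32)
    W2 = np.random.rand(10, 64).astype(np.float32)
    b2 = np.random.rand(10).astype(np.float32)

    def predict(x):
        # Forward pass only: no gradients, no big batches -- far less
        # parallel work than training, so a plain CPU is enough.
        h = np.maximum(W1 @ x + b1, 0.0)  # ReLU hidden layer
        return np.argmax(W2 @ h + b2)     # predicted class index

    x = np.random.rand(128).astype(np.float32)  # one input sample
    print(predict(x))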