Good question. It doesn't say! The PDF has absolutely no detail on what it actually does. Googling the instruction turns up people updating CPUID fields, but nothing about actually using it.
I'm guessing it means you can choose the precision (word length) of the computations, e.g. 8 bits, 16 bits, 32 bits, etc. This probably only applies to integer ops, but if they let us pack FP16 operands into 512-bit vectors, that would be awesome!
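For a concrete picture of what lane-width selection buys you, here's a minimal sketch using the AVX-512BW integer intrinsics that already exist: the same 512-bit register holds 64 8-bit lanes or 32 16-bit lanes depending on which add you pick. (My illustration, not anything from the PDF; a hypothetical FP16 version would be the same idea with half-precision lanes.)

    // Same 512 bits, two different lane widths. Compile with -mavx512bw.
    #include <immintrin.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int8_t a[64], b[64], out[64];
        for (int i = 0; i < 64; i++) { a[i] = (int8_t)i; b[i] = 1; }

        __m512i va = _mm512_loadu_si512(a);
        __m512i vb = _mm512_loadu_si512(b);

        // Treat the register as 64 x 8-bit lanes: 64 adds in one instruction
        _mm512_storeu_si512(out, _mm512_add_epi8(va, vb));
        printf("8-bit lanes: out[0]=%d out[63]=%d\n", out[0], out[63]);

        // Or treat the *same bits* as 32 x 16-bit lanes instead
        _mm512_storeu_si512(out, _mm512_add_epi16(va, vb));
        return 0;
    }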
I hope they're going even further and considering minifloats. 8-bit integers are easy but limiting and unexciting. 8-bit floats are exactly what some machine learning workloads need.
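To make "minifloat" concrete, here's a toy decoder for one plausible 8-bit layout: sign / 4-bit exponent / 3-bit mantissa, bias 7. The split and the bias are my assumptions, not a documented format, and NaN/infinity handling is omitted.

    // Toy 1-4-3 minifloat decoder (assumed layout, bias 7).
    // Compile with -lm for powf.
    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    float minifloat_decode(uint8_t m) {
        int sign = (m >> 7) & 0x1;
        int exp  = (m >> 3) & 0xF;   // 4-bit exponent, bias 7
        int frac = m & 0x7;          // 3-bit mantissa
        float val;
        if (exp == 0)                // subnormal: no implicit leading 1
            val = (frac / 8.0f) * powf(2.0f, -6);
        else                         // normal: implicit leading 1
            val = (1.0f + frac / 8.0f) * powf(2.0f, exp - 7);
        return sign ? -val : val;
    }

    int main(void) {
        // 0x38 = 0 0111 000 -> +1.0; 0xC4 = 1 1000 100 -> -3.0
        printf("%g %g\n", minifloat_decode(0x38), minifloat_decode(0xC4));
        return 0;
    }

With that layout you get magnitudes from about 2^-9 up to 15.5, which is a plausible dynamic range for quantized weights.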
If you're updating your weights frequently enough, extra precision is just wasted computation. A single 512-bit-wide operation could be updating 64 very coarse-grained weights at a time.
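Roughly what that update step could look like, assuming int8 weights and gradient steps already quantized to int8 (the quantization itself is hand-waved here). The saturating add keeps coarse weights clamped at the ends of the range instead of wrapping around:

    // Sketch: one 512-bit saturating add updates 64 signed 8-bit
    // weights. Requires AVX-512BW; compile with -mavx512bw.
    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    void update_weights(int8_t *weights, const int8_t *step, size_t n) {
        // n assumed to be a multiple of 64 to keep the sketch short
        for (size_t i = 0; i < n; i += 64) {
            __m512i w = _mm512_loadu_si512(weights + i);
            __m512i s = _mm512_loadu_si512(step + i);
            // saturating add: clamps at +/-127 instead of overflowing
            w = _mm512_adds_epi8(w, s);
            _mm512_storeu_si512(weights + i, w);
        }
    }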