It has? Where? Who uses ONNX for anything? I doubt it can even work, period. The moment you do anything other than a bare-bones classifier (which nobody really runs on devices - you need more complex models to solve real-world problems) you run into ops unsupported by your inference framework, and that's if ONNX is supported by its tooling in the first place. You can hit unsupported ops during export too: it's fairly likely you won't even be able to export your model unless it consists entirely of ops the ONNX standard implements. The rest can be exported as opaque ops, but your inference tooling certainly won't know what to do with those.
It can work for more than a bare-bones classifier. It's certainly not painless, and sometimes you need some manual work to translate your model, but work it does ...
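FWIW the export step itself is usually the least painful part. A minimal sketch, assuming a stock torchvision ResNet (the file name and opset here are just illustrative); it's custom or exotic ops where torch.onnx.export starts throwing:

    import torch
    import torchvision

    # any standard classifier exports cleanly; custom ops are where it breaks
    model = torchvision.models.resnet18().eval()
    dummy = torch.randn(1, 3, 224, 224)

    # export fails loudly if the graph contains an op with no ONNX symbolic;
    # opset_version makes a big difference to op coverage
    torch.onnx.export(
        model,
        dummy,
        "resnet18.onnx",
        opset_version=11,
        input_names=["input"],
        output_names=["logits"],
    )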
For mobile, the situation seems worse than a year ago. TF Mobile mostly worked. The current situation with TF Lite is a joke ... I tried a real-world PyTorch -> ONNX -> TF/TF Lite conversion and it has been weeks of misery.
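For reference, the leg of that pipeline where things fall apart: ONNX -> TF via the onnx-tf backend, then the TF Lite converter. A rough sketch assuming onnx, onnx-tf and tensorflow are installed (exact API details vary a lot by version); unsupported ops and NCHW/NHWC layout mismatches are the usual failure points:

    import onnx
    import tensorflow as tf
    from onnx_tf.backend import prepare

    # load the ONNX graph exported from PyTorch and build a TF representation
    onnx_model = onnx.load("resnet18.onnx")
    tf_rep = prepare(onnx_model)
    tf_rep.export_graph("resnet18_tf")  # writes out a SavedModel directory

    # convert the SavedModel to TF Lite; unsupported ops usually surface here
    converter = tf.lite.TFLiteConverter.from_saved_model("resnet18_tf")
    tflite_model = converter.convert()
    with open("resnet18.tflite", "wb") as f:
        f.write(tflite_model)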