Can you share some writeup or repo for your project?
We internally have an auto-migration tool (currently very primitive) that can learn from a cloud-hosted model and cache the result locally on the device (that's what this infra & packaging was built for ;)).
Though we would be pushing the boundary with 3+ DNNs on mobile, adding speech2text and text2speech to your app would make it an interesting addition to Seeing-AI, IMO.
That depends on your choice of training framework and runtime engine (TF Lite/TF Mobile/Numericcal/Caffe2/etc.). We wrapped multiple runtime engines to provide as much flexibility as possible, quickly. For example, some layers available in TF Mobile are not available in TF Lite (and TF Lite is slower, for now!).
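To make the "wrap multiple runtime engines" idea concrete, here's a minimal sketch of engine selection based on layer support. All engine names, op names, and the supported-op sets are illustrative assumptions, not the actual coverage of TF Lite or TF Mobile:

```python
# Hypothetical sketch: hide several runtime engines behind one interface
# and pick the first engine that implements every layer the model needs.

class Engine:
    def __init__(self, name, supported_ops):
        self.name = name
        self.supported_ops = set(supported_ops)

    def can_run(self, model_ops):
        # An engine can host the model only if it implements all of its layers.
        return set(model_ops) <= self.supported_ops


def pick_engine(model_ops, engines):
    """Return the first engine that supports all of the model's layers."""
    for engine in engines:
        if engine.can_run(model_ops):
            return engine
    raise RuntimeError("no engine supports all required layers")


# Ordered by preference; op lists are made up for illustration.
engines = [
    Engine("tflite", ["conv2d", "relu", "softmax"]),
    Engine("tf_mobile", ["conv2d", "relu", "softmax", "lstm"]),
]

# A model that needs an LSTM layer falls through to the second engine.
print(pick_engine(["conv2d", "lstm"], engines).name)  # tf_mobile
```

The real wrapper is of course more involved (tensor layouts, delegate/accelerator selection, per-device benchmarks), but the dispatch logic is the same shape.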
That being said, if there is no converter between the training format and the runtime format, you're out of luck (for now). Our runtime (targeted primarily at Qualcomm Snapdragon SoCs) supports the most common layers plus some more exotic ones that our users needed. For the other engines we simply pull the standard packages that the vendors provide.
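The "no converter, no luck" constraint can be sketched as a registry of (training format, runtime format) pairs. The pairs below are illustrative assumptions about which converters exist, not a real compatibility matrix:

```python
# Hypothetical sketch: deployment is possible only when a direct
# converter exists from the training format to the runtime format.
CONVERTERS = {
    ("tensorflow", "tflite"),
    ("tensorflow", "tf_mobile"),
    ("onnx", "caffe2"),
}


def can_deploy(train_fmt, runtime_fmt):
    """True if a converter exists for this (training, runtime) pair."""
    return (train_fmt, runtime_fmt) in CONVERTERS


print(can_deploy("tensorflow", "tflite"))  # True
print(can_deploy("pytorch", "tflite"))     # False: out of luck (for now)
```

In practice you can sometimes chain converters through an intermediate format (e.g. via ONNX), which is exactly the kind of path an auto-migration tool could search for you.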