My concern is what happens when you've implemented it, come to rely on it, and then the company providing the service pulls a "Google" and deprecates the project. I haven't seen that mentioned in any of the comments.
This is the same issue you'd have with any closed-source software, and it's why there's a push for fully open source or at least "semi" open source models (which provide the weights but not the training data). If you are critically dependent on software that can just disappear for reasons outside your control, you're beholden to its developers.
On the flip side, I feel this is the big problem with monetizing AI. AI is already bad enough in that its output is practically always untrusted, so any customer-facing application of AI requires a second AI to verify that the first one didn't output anything improper to consumers, or even to children (because apparently parents are fine with an app that feeds their kids randomly generated text).
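The two-stage pattern described above can be sketched roughly as follows. This is a minimal illustration, not any vendor's actual pipeline: `generate` stands in for the first model, and `moderate` is a placeholder keyword check where a real system would call a separate moderation model or API.

```python
# Hypothetical names throughout: generate(), moderate(), BLOCKLIST are
# illustrative stand-ins, not a real library's API.
BLOCKLIST = {"improper", "unsafe"}

def moderate(text: str) -> bool:
    """Second-pass check: return True if the draft is safe to show.
    A real deployment would query a dedicated moderation model here."""
    return not any(word in text.lower() for word in BLOCKLIST)

def respond(generate, prompt: str, fallback: str = "[response withheld]") -> str:
    """Only surface the first model's output if the checker approves it."""
    draft = generate(prompt)  # first AI produces untrusted output
    return draft if moderate(draft) else fallback
```

For example, `respond(lambda p: "hello there", "hi")` returns the draft unchanged, while a draft containing a blocked term is replaced by the fallback message before it ever reaches the user.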
It would be a little better if you had control over the model. But without intellectual property rights over the model, what is the AI company even selling? A GUI? Anyone could just copy the model, skip all the training costs, and sell a React frontend for a model that cost billions of dollars to train.
It feels like you can't make money from training the AI itself in a way that makes sense for customers, but you can make money from a model someone else trained, by reselling access to it packaged for non-technical users.