Anyone claiming that the accuracy of AI models WILL improve is either unaware of how they really work or is a snake oil salesman.
Forget about a model that knows EVERYTHING. Let's just train a model that is an expert in not all the law of the United States, not even one state's law, but just FULLY understands the tax law of a single state, to the extent that whatever documents you throw at it, it beats a tax consultancy firm every single time.
If even that were possible, OpenAI et al. would be playing this game differently.
Those use cases are never sold as "mobile apps", but rather as "enterprise solutions" that cost the equivalent of several employees.
An employee can be held accountable, and fired easily. An AI? You'll have to talk to the Account Manager, and sit through their attempts to 'retain' you.
This is one of those "perfect is the enemy of good" situations. Sure, for things where you have a legal responsibility to get everything exactly right, using an LLM as the full solution is probably a bad idea (although lots of accountants are already using them to speed up processes; they just check the outputs). That isn't the case for 99% of tasks, though. Something that's mostly accurate is good. People are happy with that, and they will buy it.