
Anyone claiming that the accuracy of AI models WILL improve is either unaware of how they really work or is a snake oil salesman.

Forget about a model that knows EVERYTHING. Let's just train a model that is an expert not in all of United States law, not even in the law of a single state, but one that FULLY understands the tax law of just one state, to the extent that whatever documents you throw at it, it beats a tax consultancy firm every single time.

If even that were possible, OpenAI et al. would be playing this game differently.



Why does a mobile app need to beat a highly trained professional every single time in order to be useful?

Is this standard applied to any other app?


Those use cases are never sold as "mobile apps", but rather as "enterprise solutions" that cost the equivalent of several employees.

An employee can be held accountable, and fired easily. An AI? You'll have to talk to the account manager, and sit through their attempts to "retain" you.


Because it's taxation. Financial well-being is at stake. We're even looking at potential jail time for tax fraud, tax evasion, and what not.

My app is powered by GTPChatChat, the model beating all artificially curated benchmarks.

Still wanna buy?


This is one of those "perfect is the enemy of good" situations. Sure, for things where you have a legal responsibility to get things perfectly right, using an LLM as the full solution is probably a bad idea (although lots of accountants are already using them to speed up processes; they just check the outputs). That isn't the case for 99% of tasks, though. Something that's mostly accurate is good. People are happy with that, and they will buy it.



