I'm currently building out some code that should go into production in the next week or two, and precisely because of that we're using an LLM to prefill data and then having a human look it over.
For our use case the LLM prefilling the data is significantly faster: the task used to take about 3 hours and is now down to about one. If it ever got to the point where human review wasn't needed, it would become a task that takes about 3 minutes.
Will LLMs ever get to the point of being perfectly reliable (or at least with an error margin low enough for our use case)? I don't think so.
It does make for a very cheap accelerator though.
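The prefill-then-review workflow described above can be sketched in a few lines. This is a minimal illustration, not the actual production code: `llm_prefill` is a hypothetical stand-in for a real model call, and the record/review names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    raw: str
    prefill: Optional[str] = None
    approved: bool = False

def llm_prefill(raw: str) -> str:
    # Stand-in for the real model call; returns a draft value for review.
    return raw.strip().title()

def review(record: Record, corrected: Optional[str] = None) -> Record:
    # Human reviewer either accepts the draft or supplies a correction.
    if corrected is not None:
        record.prefill = corrected
    record.approved = True
    return record

records = [Record(raw="  acme corp  "), Record(raw="beta llc")]

# Step 1: LLM prefills every record (the fast, unreliable part).
for r in records:
    r.prefill = llm_prefill(r.raw)

# Step 2: human review gates each record before it counts (the slow, reliable part).
review(records[0])                         # draft accepted as-is
review(records[1], corrected="Beta LLC")   # draft corrected by the reviewer
```

The point of the split is that nothing leaves the pipeline without `approved=True`, so the model's error rate only costs review time, not correctness.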