
> With what level of accuracy? And what guarantee of correctness? Because a report that happens to get the joins wrong once every 1000 reports is going to lead to fun legal problems.

The truth is everyone knows LLMs can't tell correct from incorrect, can't tell real from imagined, and cannot care.

The word "hallucinate" has been used to explain when an LLM gets things wrong, when it's equally applicable to when it gets things right.

Everyone thinks the hallucinations can be trained out, leaving only edge cases. But in reality, edge cases are often horror stories. And an LLM edge case isn't a known quantity for which, say, limits, tolerances, and test suites can really do the job, because there's nobody with domain skill saying: look, this is safe or viable within these limits.

All LLM products are built with the same intention: we can use this to replace real people or expertise that is expensive to develop, or sell it to companies on that basis.

If it goes wrong, they know the excited customer will invest an unbillable amount of time re-training the LLM or double-checking its output -- developing a new, unnecessary, tangential skill, or still spending time doing what the LLM was meant to replace.

But hopefully you only need a handful of such babysitters, right? And if it goes really wrong, there are disclaimers and legal departments.


