I think the major issue with asking LLMs (ChatGPT, etc.) for advice on various subjects is that they are typically 80-90% accurate (YMMV; I'm speaking anecdotally here). That means the chance of a wrong answer becomes an afterthought: you know it exists, but skipping verification is an efficiency gain that rarely bites you. And once you stop verifying answers, the incorrect ones go unnoticed, which further obscures the risk of the practice.
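To put rough numbers on the compounding part (a minimal sketch; the 80-90% figure is the anecdotal one above, and the assumption that answers are independently right or wrong is mine):

    # Probability that at least one answer in a session is wrong,
    # assuming each answer is independently correct with probability p.
    def p_at_least_one_wrong(p: float, n_queries: int) -> float:
        return 1 - p ** n_queries

    for p in (0.8, 0.9):
        for n in (5, 20, 100):
            print(f"accuracy={p:.0%}, queries={n}: "
                  f"P(>=1 wrong) = {p_at_least_one_wrong(p, n):.1%}")

At 90% per-answer accuracy, the chance of at least one wrong answer is already ~41% after 5 questions and ~88% after 20. "Rarely bites you" per answer still adds up fast per session.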

It's a hard problem to solve. I wouldn't expect LLM providers to care, because that's how our (current) society works, and I wouldn't expect users to know better, because that's how most humans operate.

If anyone has a good idea for this, I'm open to suggestions.


