
> Is anyone really cocksure on the basis of LLM received knowledge?

I work for a company with an open-source product, and the number of support requests we get from people who ask a chatbot to do their config and then end up with something nonfunctional is quite significant. It goes as far as users complaining that our API is down because the chatbot hallucinated the endpoint.



LLMs do love to make up endpoints and parameters, but I have found that ones with web access are pretty good at copy/pasting configs if they can find them, so it might be worth a few minutes of exploring what people are actually finding that's causing them to make up an endpoint. I have not (yet!) seen an instance where making something easier for LLMs to parse didn't also help human comprehension.
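
As an aside, one cheap way to catch this class of hallucination is to check a suggested path against a project's published OpenAPI spec before acting on it. A minimal sketch, assuming the project publishes a spec; the URL and endpoint below are placeholders, not a real API:

  import json
  import urllib.request

  SPEC_URL = "https://example.com/openapi.json"   # placeholder: project's published spec
  suggested = "/v2/users/bulk-import"             # placeholder: path the chatbot produced

  # Fetch the spec and collect the paths it actually defines.
  with urllib.request.urlopen(SPEC_URL) as resp:
      spec = json.load(resp)
  known_paths = set(spec.get("paths", {}))

  if suggested in known_paths:
      print(f"{suggested} exists in the published spec")
  else:
      print(f"{suggested} is not in the spec -- likely hallucinated")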


I work in DevSecOps, and devs sometimes come to us with AI-slop summaries and writeups about our own tooling. Any time I see emojis in a message, I know I'm about to have a laugh.



