
Indeed, it's not a solution to LLM hallucinations—as far as I know, nobody has one.

But it is a solution to having to re-run the model and check the format of its output to ensure that it conforms to your expectations.
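To illustrate, here is a minimal sketch of the retry-and-validate loop that constrained decoding makes unnecessary; `call_model` is a hypothetical stand-in for an actual LLM API call, and the required keys are just example expectations:

```python
import json

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return '{"name": "Alice", "age": 30}'

def get_structured_output(prompt: str, required_keys: set, max_retries: int = 3) -> dict:
    """Re-run the model until its output parses and conforms to expectations."""
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed JSON: run the model again
        if required_keys <= parsed.keys():
            return parsed  # output conforms to the expected format
    raise ValueError("model never produced conforming output")

result = get_structured_output("Describe a user as JSON.", {"name", "age"})
```

With constrained generation, the output is guaranteed to match the grammar or schema by construction, so this whole loop disappears.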
