
It's just a word meaning the AI gives a wrong answer.

No, it’s more specific than just wrong.

Hallucination is when a model creates a bit of fictitious knowledge, and uses that knowledge to answer a question.



Can you give an example of a "wrong" answer vs a "hallucinated" answer?


The issue is there is no difference between a right answer and a hallucinated answer.


There are many types of wrong answers, and the difference is based on how the answer came to be. In the case of BS/hallucination there is no reason or logic behind the answer; it is, in the case of an LLM, basically just random text. There was no reasoning behind the output, and it wasn't based on facts.

You can of course argue whether it matters how a wrong answer came about, but there is a difference.


Wrong is code that doesn’t compile. Hallucinated is compilable code using a library that never existed.


Can code using a library that doesn't exist compile? I admit ignorance here.


No, it can't. I should have said code that has valid syntax but uses APIs or libraries that don't exist.
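A minimal Python sketch of that distinction, for illustration only: json.load_url is a made-up function standing in for a hallucinated API, while the first snippet is just a syntax error.

```python
import ast

# "Wrong" in the narrow sense: invalid syntax, rejected before it ever runs.
broken = "def add(a, b) return a + b"

# "Hallucinated": perfectly valid syntax, but it calls an API that does not
# exist (json has no load_url function -- a made-up name for illustration).
hallucinated = "import json\ndata = json.load_url('https://example.com/data.json')"

for name, src in [("broken", broken), ("hallucinated", hallucinated)]:
    try:
        ast.parse(src)
        print(f"{name}: parses fine")  # the hallucinated snippet passes this check
    except SyntaxError as e:
        print(f"{name}: rejected by the parser ({e.msg})")
```

The hallucinated snippet only fails later, at runtime, with an AttributeError, which is what makes it harder to catch than a plain syntax error.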


It doesn't need to create wrong answers. It's enough to recall people who gave wrong answers.


I've heard the term originated in image recognition, where models would "see" things that weren't there.

You can still get that with zero bad labels in a supervised training set.

Multiple causes for the same behaviour make progress easier, but make knowing whether it's fully solved harder.



