This just seems like goalpost shifting to make it sound like these models are more capable than they are. Oh, it didn't "hallucinate" (a term which I think sucks because it anthropomorphizes the model), it just "fabricated a fact" or "made an error".

It doesn't matter what you call it; the output was wrong. And it's not as if something new or different is going on here versus whatever your definition of a hallucination is: in both cases the model predicted the wrong sequence of tokens in response to the prompt.
