
> WARNING: Outputs may be unreliable! Language Models are prone to hallucinate text.

I’m not sure what the point of this is or why they are making it public. What use is content about science if any part of it can be wrong? Creating fiction or marketing copy from these models is fine, but surely this is an abomination.


