
Are you implying that an LLM needs to be trained on a specific piece of text to answer questions about it?




If you want proper answers, yes. If you want to rely on whatever Reddit or TikTok says about the book, then I guess you're already fine with hallucinations and with others doing the thinking for you. Hence the issues brought up in the article.

I wouldn't trust an LLM for anything more than the most basic questions if it didn't actually have the text to cite.


Luckily, the LLM can have the text to cite: it can be passed in at inference time, which is legally distinct from training on the data.
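
A minimal sketch of what "passed in at inference time" means in practice: the source text goes into the prompt, not into the model weights. `call_llm` here is a hypothetical stand-in for whatever chat-completion client you actually use; the only point is that the book text travels in the context window.

    # Sketch of in-context (retrieval-style) prompting, as opposed to training.
    def build_prompt(question: str, source_text: str) -> str:
        # Include the passage verbatim so the model can quote it,
        # rather than reconstructing it from whatever it memorized.
        return (
            "Answer the question using only the text provided below. "
            "Quote the relevant passage when you can.\n\n"
            f"--- SOURCE TEXT ---\n{source_text}\n--- END SOURCE ---\n\n"
            f"Question: {question}"
        )

    def answer_from_text(question: str, source_text: str, call_llm) -> str:
        # call_llm is any function that takes a prompt string and returns text.
        return call_llm(build_prompt(question, source_text))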

Having access to the text and being trained on the text are two different things.


