
Perhaps they want to include online discussions/commentaries about their paper in the training data without including the paper itself


Most online discussion doesn't quote the entire text. You can pick almost any sentence from such a document and it will be unique on the internet.
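A minimal sketch of that idea: treat long sentences from the paper as near-unique fingerprints, then drop any training document that contains one verbatim. All names and the toy data here are illustrative, not anyone's actual pipeline.

```python
import re

def sentences(text, min_words=8):
    # Split on sentence-ending punctuation; keep only longer sentences,
    # since short ones are far less likely to be unique to the paper.
    parts = re.split(r"(?<=[.!?])\s+", text)
    return {s.strip() for s in parts if len(s.split()) >= min_words}

def quotes_paper(doc, paper_sentences):
    # True if the document contains any fingerprint sentence verbatim.
    return any(s in doc for s in paper_sentences)

# Toy example: one discussion comment merely mentions the paper,
# the other quotes a full sentence from it.
paper = ("We propose a novel attention mechanism that scales linearly. "
         "Our experiments on three benchmarks confirm the improvement.")
corpus = [
    "Interesting thread about a paper claiming linear attention.",
    "Quote: We propose a novel attention mechanism that scales linearly.",
]

fingerprints = sentences(paper)
kept = [d for d in corpus if not quotes_paper(d, fingerprints)]
# Only the comment without a verbatim quote survives.
```

This keeps discussion *about* the paper while removing documents that reproduce its text, which is exactly the asymmetry the parent comment is pointing at.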

I was thinking it might be related to the difficulty of building a search engine over the huge training sets, but if you don't care about scaling or query performance it shouldn't be too hard to set one up internally that's good enough for the job. Even sharded grep could work, or filters done at the time the dataset is loaded for model training.
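The "sharded grep" approach could look like the following sketch: scan each shard in parallel for an exact phrase and report (shard, record) hits. The in-memory shards are stand-ins; a real job would stream files from disk or object storage.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy shards; in practice these would be files or dataset partitions.
SHARDS = {
    "shard-00": ["the quick brown fox", "an unrelated record"],
    "shard-01": ["exact phrase from the paper appears here", "noise"],
}

def grep_shard(name, needle):
    # Exact substring match within one shard; returns (shard, record_index).
    return [(name, i) for i, rec in enumerate(SHARDS[name]) if needle in rec]

def sharded_grep(needle):
    # Fan the search out across shards and flatten the hits.
    with ThreadPoolExecutor() as ex:
        results = ex.map(lambda n: grep_shard(n, needle), SHARDS)
    return [hit for hits in results for hit in hits]

print(sharded_grep("exact phrase"))  # → [('shard-01', 0)]
```

Since removal only needs to find documents, not rank them, this "good enough" exact search avoids building an index entirely; the same filter could run once at dataset-load time instead.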


Why use a search engine when you can use an LLM? ;)


Well, because the goal is to locate the exact documents in the training set and remove them, not answer a question...


So you stream the training set through the context window of the LLM, and ask it if it contains the requested document (also in the context window).

The advantage is that it can also detect variations of the document.
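A rough sketch of that streaming scheme: slide a window of training records past the target document and ask a model whether the window contains it or a close variant. `ask_llm` is a hypothetical stand-in for a real model call; it is stubbed here with a crude word-overlap heuristic purely so the control flow is runnable.

```python
def ask_llm(window, target):
    # Stub: a real implementation would prompt an LLM with both texts.
    # Word-overlap approximates "contains the document or a variation".
    w, t = set(window.split()), set(target.split())
    return len(w & t) / max(len(t), 1) > 0.8

def scan_stream(records, target, window_size=2):
    # Slide a window over the training stream; yield indices where the
    # model says the target document (or a variant) appears.
    for i in range(len(records)):
        window = " ".join(records[i:i + window_size])
        if ask_llm(window, target):
            yield i

stream = ["totally unrelated text", "we present a new method",
          "for training large models efficiently", "more noise"]
target = "we present a new method for training large models"
print(list(scan_stream(stream, target)))  # → [1]
```

The obvious catch is cost: this pushes the entire training set through model inference, whereas grep-style exact matching is orders of magnitude cheaper and misses only paraphrased copies.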



