
One surprising thing to me is that using model outputs to train other/smaller models is standard fare and seems to work quite well.

So it seems to be less about never training AI on its own outputs and more about curating some overall quality bar for the content, AI-generated or otherwise.
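For reference, the "soft" version of training a smaller model on another model's outputs usually means matching the student's softened output distribution to the teacher's. A minimal sketch in PyTorch, assuming you already have teacher and student logits for the same batch (the function name and temperature value are just illustrative):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then match them with KL divergence.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2
```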



Back in the early 2000s, when I was doing email filtering using naive Bayes in my POPFile email filter, one of the surprising results was that taking the output of the filter as correct and retraining on a message as if it had been labelled by a human worked well.


Were you thresholding the naïve Bayes score or doing soft distillation?


POPFile was doing something incredibly simple (if enabled). Imagine there are two classes of email, ham and spam (POPFile was actually built to classify into arbitrary categories, but it was often used as a spam filter). When a message was received and classified, its classification was assumed to be correct and the entire message was fed into training as if the user had specifically told the program to train on it (which otherwise only happened when a message was incorrectly classified).

In the two-class case the two classes (ham and spam) were so distinct that parameters essentially unique to each class became more and more important to that class. It also caused the filter to pick up new parameters specific to each class (e.g. as spammers changed their trickery to evade the filter, it would learn the new tricks).

There was a threshold involved. I had a cutoff score so that the filter would only re-train on a message when the classifier was fairly "certain" whether it was ham or spam.
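To make that concrete, here is a minimal self-training sketch in Python using scikit-learn's multinomial naive Bayes rather than POPFile's own code; the example messages, the 0.9 cutoff, and the vocabulary refit are illustrative assumptions, not details taken from POPFile:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical data: a small labelled seed set plus a stream of unlabelled messages.
labelled_texts = ["cheap meds now", "meeting at 3pm", "win a free prize", "lunch tomorrow?"]
labels = ["spam", "ham", "spam", "ham"]
unlabelled_texts = ["free prize inside", "agenda for the 3pm meeting"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(labelled_texts)
clf = MultinomialNB()
clf.fit(X, labels)

THRESHOLD = 0.9  # only retrain on messages the classifier is fairly "certain" about

for text in unlabelled_texts:
    x = vectorizer.transform([text])
    probs = clf.predict_proba(x)[0]
    if probs.max() >= THRESHOLD:
        # Treat the classifier's own output as if the user had labelled the message,
        # and fold the message back into the training data.
        pseudo_label = clf.classes_[probs.argmax()]
        labelled_texts.append(text)
        labels.append(pseudo_label)
        # Refit the vocabulary so new tokens (e.g. fresh spammer tricks) become parameters.
        X = vectorizer.fit_transform(labelled_texts)
        clf.fit(X, labels)
```

The refit of the vectorizer is what lets the model pick up brand-new tokens from confidently classified messages, which is the same effect described above where the filter learned new spammer tricks on its own.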



