
Distilling means fine-tuning an existing model using outputs from the bigger model. The actual technique is in the details: what you choose to generate from the bigger model, how long to train for, and a bunch of other nitty-gritty stuff I don't know about, because I'm not an ML engineer either.
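
For concreteness, the basic loop is roughly the following. This is only a sketch, not any lab's actual recipe: the model names are placeholders, it assumes the Hugging Face transformers/PyTorch stack, and the student is trained on the teacher's generated text with plain next-token cross-entropy.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder model names -- not the actual models under discussion.
    teacher_name = "big-lab/teacher-70b"
    student_name = "small-lab/student-7b"

    teacher_tok = AutoTokenizer.from_pretrained(teacher_name)
    student_tok = AutoTokenizer.from_pretrained(student_name)
    teacher = AutoModelForCausalLM.from_pretrained(teacher_name, torch_dtype=torch.bfloat16)
    student = AutoModelForCausalLM.from_pretrained(student_name)

    prompts = ["Explain why the sky is blue.", "Prove that sqrt(2) is irrational."]

    # 1. Generate training text from the teacher.
    records = []
    for p in prompts:
        ids = teacher_tok(p, return_tensors="pt").input_ids
        out = teacher.generate(ids, max_new_tokens=256, do_sample=True, temperature=0.7)
        records.append(teacher_tok.decode(out[0], skip_special_tokens=True))

    # 2. Fine-tune the student on that text with ordinary next-token cross-entropy.
    student.train()
    opt = torch.optim.AdamW(student.parameters(), lr=1e-5)
    for text in records:
        batch = student_tok(text, return_tensors="pt")
        loss = student(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()

The "what you choose to generate" part is step 1: which prompts you feed the teacher and how you sample from it.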

Google it!




> Distilling means fine-tuning an existing model using outputs from the bigger model.

Crucially, the teacher model's output includes per-token probabilities, so the fine-tuning learns to match the teacher's entire output distribution rather than just its sampled tokens.
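
When those per-token probabilities are available, the usual loss is a temperature-scaled KL divergence between the two distributions instead of cross-entropy on sampled tokens. A minimal sketch (assuming PyTorch, and that both models share a vocabulary so the logits line up):

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Both tensors: (batch, seq_len, vocab_size), over the same vocabulary.
        t = temperature
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        # KL(teacher || student); the t**2 factor keeps gradient magnitudes
        # comparable across temperatures (standard Hinton-style scaling).
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t ** 2)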


That's only possible if they use the same tokens, which likely requires that they share the same tokenizer. I'm not sure that's the case here; R1 was built on a closed OpenAI model's output.
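
A quick way to see whether token-level distillation is even an option is to compare the two vocabularies directly. Model names here are placeholders:

    from transformers import AutoTokenizer

    teacher_tok = AutoTokenizer.from_pretrained("lab-a/teacher")  # placeholder
    student_tok = AutoTokenizer.from_pretrained("lab-b/student")  # placeholder

    # Token-level (logit) distillation needs the vocabularies to line up;
    # text-level distillation only needs the decoded strings.
    print("identical vocabularies:", teacher_tok.get_vocab() == student_tok.get_vocab())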


That was an (as far as I can tell) unsubstantiated claim made by OpenAI. It doesn’t even make sense, as o1’s reasoning traces are not provided to the user.


One reason to believe OpenAI here is that R1 will occasionally claim to be made by OpenAI, which in LLaMA fine-tunes, for example, is a telltale sign of synthetic data generated by ChatGPT.

Note that this isn't necessarily o1. While o1 is specifically trained to produce CoT, you can also get 4o and similar models to produce it with the appropriate prompts, and then train on that output.
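
Something like the following would collect CoT-style transcripts from a non-reasoning chat model. This is only a sketch against the OpenAI Python client; the prompt wording and output path are made up:

    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def collect_cot(question: str) -> dict:
        # Ask a general chat model to show its reasoning, then keep the
        # whole transcript as a training example.
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Think step by step and explain your reasoning before giving the final answer."},
                {"role": "user", "content": question},
            ],
        )
        return {"prompt": question, "completion": resp.choices[0].message.content}

    with open("cot_dataset.jsonl", "a") as f:
        f.write(json.dumps(collect_cot("How many primes are there below 100?")) + "\n")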


I suppose it might be hard to avoid encountering ChatGPT outputs "in the wild" now, even if they don't explicitly use them as training material.


> Google it!

Or you could provide some example links


This only makes sense if I have some great canonical explanation of distillation on hand. But it’s a simple concept. There are hundreds of identical explanations online.


Are all of those hundreds of explanations good, or would you recommend one in particular?



