One reason to believe OpenAI here is that R1 will occasionally claim to be made by OpenAI, which, in e.g. LLaMA finetunes, is indicative of training on synthetic data generated by ChatGPT.

Note that this isn't necessarily o1. While o1 is specifically trained to do CoT, you can also make 4o etc. produce it with appropriate prompting, and then train on that output.
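A minimal sketch of what that pipeline could look like, assuming the standard OpenAI Python client; the model name, system prompt, and JSONL record format here are illustrative placeholders, not anything attributed to DeepSeek or OpenAI:

    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Ask a non-reasoning model to write out its intermediate steps.
    # The exact wording of this prompt is an assumption for illustration.
    SYSTEM = "Think step by step. Write out your full reasoning before giving the final answer."

    def generate_cot_example(question: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; any chat model that follows the prompt
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": question},
            ],
        )
        answer_with_cot = resp.choices[0].message.content
        # Store as a chat-style record, e.g. for later supervised fine-tuning.
        return {
            "messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer_with_cot},
            ]
        }

    if __name__ == "__main__":
        questions = ["What is 17 * 24?", "Is 1001 prime?"]
        with open("synthetic_cot.jsonl", "w") as f:
            for q in questions:
                f.write(json.dumps(generate_cot_example(q)) + "\n")

Training on traces collected this way would also explain a model occasionally claiming to be made by OpenAI, since that identity leaks through in the assistant-side text.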



I suppose it might be hard to avoid encountering ChatGPT outputs "in the wild" now, even if they don't explicitly use them as training material.



