
> Due to our concerns about malicious applications of the technology, we are not releasing the trained model.

> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.

Excerpt from the recent OpenAI blog post about the GPT-2 text models. The concern seems valid: releasing the trained model, or even exposing it through a web app, would make it easy for anyone to generate malicious content online.


