>Due to our concerns about malicious applications of the technology, we are not releasing the trained model.
> We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.
Excerpt from the recent OpenAI blog post about the GPT-2 text models. The concern seems valid: releasing the code, or even just a web app, could let anyone easily generate malicious content online.