
Remember when OpenAI wrote this?

> Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights.

Well I guess Meta doesn’t care.

https://openai.com/blog/better-language-models/



Ever since OpenAI transitioned away from the non-profit model, I'd take these statements with a grain of salt. Yes, there may be some truth in that opinion, but don't underestimate monetary interests when someone has an easy ~12-month industry lead. Meta's existence and financial wellbeing, on the other hand, don't depend on this stuff, so they have less incentive to keep things proprietary. It seems ironic and almost a bit sad that the new commercial circumstances have basically reversed these companies' original roles in AI research.


I feel the same way. It does seem odd, though, that Meta would release this despite the precedent set by OpenAI with statements like this. What does Meta gain by releasing this for download?


I hate the nanny point of view of OpenAI. IMO trashing Meta because their models may be misused isn't fair.

I think that hackers should advocate to have the freedom to toy/work with these models.


OpenAI released their large GPT-2 model weights several months after making that post: https://openai.com/blog/gpt-2-1-5b-release/


OpenAI is only concerned with making money. What you quote is the PR reason, so they don't sound like the empty corporate money-grubbers they actually are.


hint: OpenAI didn't care either



