OpenAI doing this was just to get attention. Any funded entity could trivially reproduce their work. There is no way this was done out of any serious, principled fear of bad actors getting their hands on it.
But if they publish the pretrained model, then it's not just funded entities that can reproduce their work, but essentially any person who can type `pip install tensorflow` or whatever. That's a pretty big difference in reach. Although, it's probably only a difference of a few months timewise.
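To make the reach difference concrete: once weights are published in a standard format, using the model is a load call, not a research project. A minimal sketch, using a toy Keras model as a stand-in for a hypothetical released checkpoint (the filename and architecture here are illustrative, not OpenAI's):

```python
# Toy illustration: a published pretrained model takes a few lines
# to reload and run. The tiny model below stands in for a real
# (hypothetical) released checkpoint.
import numpy as np
import tensorflow as tf

# Pretend this tiny model is the "pretrained" release.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(2),
])
model.save("released_model.keras")

# Anyone who can `pip install tensorflow` can now do this:
reloaded = tf.keras.models.load_model("released_model.keras")
out = reloaded.predict(np.zeros((1, 3)))
print(out.shape)  # (1, 2)
```

Compare that to reimplementing and retraining from a paper, which is where the funded-entity barrier actually sits.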
We will develop better protections against deepfakes etc. at a much slower rate if we limit these models' public visibility. We need better counter-tools.
Human ingenuity will not be contained like this. I'm almost certain that somewhere between 10 and 100 people took OpenAI's withheld release as a challenge to recreate it on their own.
This is fine. Maybe this makes things significantly more chaotic in the short term. But we have to take the long view on this. Ten years from now this tech will be seen as a joke compared to whatever they will have. It's time to start preparing for that.
Ya, but reproducing the work from their paper is about as easy as `pip install tensorflow`. Anyone with a CS degree should be able to do it with a bit of effort. I agree that's still a difference in reach, but I think it's negligible here.
I don't think so. They published a paper describing their methods. I've implemented techniques from papers like these before, it's not that hard. What they're doing doesn't seem especially complicated to me.