
This is troubling in the context of OpenAI deciding not to release their code and dataset for fear of it being put to bad use. It's a tricky topic but I get nervous at the idea of research being censored.


That was very strange to me as well, since OpenAI's original stated goal was to decentralize power. Instead, it looks like they just want to be one more of the few powerful entities.


It's possible they would be willing to share with legitimate researchers who ask. Putting it out there for anyone to download is not the only way to do it.


You're interpreting it totally backwards.

If the thing OpenAI made isn't an interesting enough discovery without its data (because it's all arbitrary anyway), but is very useful to spammers as a piece of code, then OpenAI has truly achieved the exact opposite of the goals they were aiming for.

I mean, the Faceswap people have the same problem. They couldn't give less of a shit about porn. But that's what people used it for.


Are you surprised? It's too tempting.


I thought OpenAI's goal was to make sure that the first Strong General AI was also a safe AI, to the point that they've said that if it looks like they're not going to win that race, they'll work for the leader instead. The theory is that if it's a close race at the end, corners will be cut that might doom us all, whereas throwing their weight behind the leader would give enough margin over the #3 AI development program to go slow and get safety right.


[flagged]


70s? Was it ever different?


[flagged]



Sure, he left OpenAI's board a year ago and he's no longer chairman of Tesla. That doesn't especially change things in practice. Could his lawyers have suggested leaving the OpenAI board because of a perceived conflict of interest and problems with his association with other public companies? And can people from Musk's companies still share information with OpenAI? Absolutely.


I like your bubble...


OpenAI did this just to get attention. Any funded entity could trivially reproduce their work. There is no way this was done out of any serious, principled fear of bad actors getting their hands on it.


But if they publish the pretrained model, then not just funded entities can reproduce their work, but essentially anyone who can type `pip install tensorflow` or whatever. That's a pretty big difference in reach, although timewise it probably only amounts to a few months.
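
For a sense of how low that bar becomes once pretrained weights are public, here's a minimal sketch using the third-party Hugging Face `transformers` reimplementation of GPT-2 and the small "gpt2" checkpoint (my own example, not OpenAI's release code):

    # Illustrative sketch only: third-party reimplementation and checkpoint,
    # not OpenAI's own release. Requires `pip install transformers torch`.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # downloads the published weights
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    inputs = tokenizer("Once the weights are public, anyone can", return_tensors="pt")
    # Sample a 50-token continuation from the pretrained model.
    outputs = model.generate(**inputs, max_length=50, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The whole "reach difference" collapses into a dozen lines and a download.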


We will get better protections against deepfakes etc. at a much slower rate if we limit their public visibility. We need better counter-tools.

Human ingenuity will not be contained like this. I'm almost certain that somewhere between 10 and 100 people who saw OpenAI's censored release took it as a challenge to recreate it on their own.

This is fine. Maybe this makes things significantly more chaotic in the short term. But we have to take the long view on this. Ten years from now this tech will be seen as a joke compared to whatever they will have. It's time to start preparing for that.


Ya, but reproducing the work from their paper is about as hard as 'pip install tensorflow' anyway. Anyone with a CS degree should be able to do it with a bit of effort. I agree that there's still a difference in reach, but I think it's pretty much negligible here.


I think you are vastly underestimating the difficulty of achieving those results.


I don't think so. They published a paper describing their methods. I've implemented techniques from papers like this before; it's not that hard. What they're doing doesn't seem especially complicated to me.


As a grad student, my trust that they actually did what they said they did is zero. If you don't publish your source code without a damned good reason (e.g. your legal department says you are not allowed to), your publication is near worthless in non-theory CS, since it is very likely to be unreproducible and probably has bugs that render the conclusions invalid.


Most start with good intentions. After realizing the power and advantage they have, whether it comes from advanced technology, political office, or some other position, they become jealous of it and find moral justification for clinging to it despite the conflict with their original intentions.

There's probably a word for this sentiment that I'm not aware of.


Especially for an organization calling itself "Open"-something.



