
It would be interesting if there were a WikiLeaks-type organization that facilitated safely leaking large models from big corporations.

Not sure how that would play out for accelerationism and existential risk, but I certainly don't trust the current powers that be.



Open sourcing is widely recognized to be a bad thing when it comes to AI existential risk. (For the same reason you don't want simple instructions for how to build bio weapons posted to the internet.)

Modern AI is pretty harmless though, so it doesn't matter yet.


> Modern AI is pretty harmless though, so it doesn't matter yet.

Yes, which is why the only thing people flipping out about the "safety" of making these models public achieve is making the public distrustful of AI safety.



