you joke, but the hobbling of these 'safe' models is exactly what spurs development of the unsafe ones that are run locally, anonymously, and for who knows what purpose.
someone really interested in control would want OpenAI or whatever centralized organization to be able to sift through the results for dangerous individuals -- part of this is making sure to stymie development of alternatives to that concept.