Yeah, all this "gotcha" stuff around AI is pretty ridiculous. But it's because there is a massive cognitive dissonance happening in society and technology right now. We want maximum freedom for ourselves to say, do, build, and think whatever we want, but we also want to massively restrict the ability of others to say, do, build, and think whatever they want. It's simply not possible to satisfy both of these desires in a way that feels internally consistent, and the hysterical nature of internet discourse and of the output of these new tools is a symptom of that tension.
Personally, I would like it to work that way.
But I also understand that it wouldn't work for people who expect that once dangerous content is identified and removed from the internet, the models will be re-trained immediately.
I hope local-first models like Mistral will fix this. If you run a model locally, other people, with their different expectations, have little say over your LLM.
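For concreteness, here's a minimal sketch of what "running it locally" can look like, assuming you have the Hugging Face `transformers`, `torch`, and `accelerate` packages installed and enough memory for a 7B model; the model name below is the public Mistral-7B-Instruct checkpoint, but any locally downloadable weights would do.

```python
# Minimal local-inference sketch: the weights live on your own disk,
# and nothing between you and the model enforces anyone else's policy.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # public HF checkpoint
    device_map="auto",  # put weights on GPU if available, else CPU (needs `accelerate`)
)

result = generator(
    "Explain what a local-first LLM is.",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```

Once the weights are cached, this runs fully offline; any filtering or re-training happens only if you choose to do it.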