It’s an open problem in AI development to ensure LLMs never say the “wrong” thing. When dealing with a non-deterministic system, one can’t anticipate or oversee the moral shape of all its outputs. There are, however, many things you can’t get ChatGPT to say, and OpenAI often bans users after repeated violations, so it isn’t true that they fully abdicate responsibility for the use and outputs of their models in realms where the harm is tractable.