I will never have full trust in an assertion unless (a) it's included in a contract that binds all parties, (b) the same contract includes a penalty for breaking the assertion that's severe enough to discourage it, and (c) I know the financial and other costs of litigation won't be severe for me.
In short, unless my large employer will likely win in punishing OpenAI should they break a promise, that promise is just aspirational marketing speak.
For data retention and usage, I'd also need a similar contractual agreement to tie the hands of any company that would acquire them in the future.
Copilot for individuals stores code snippets by default according to its TOS. Sure, you can probably find a way to opt out of that somewhere, but you'd have to read the TOS for every plugin and service you use, find the opt-out links, and make sure you don't opt back in via some other route — not Copilot itself, but ChatGPT proper, or some other GitHub or VS Code plugin, service, button, or knob.
From a GDPR or commercial-confidentiality perspective, it doesn't matter what OpenAI say they'll do with your data: you can't share it with them in the first place.
Let's say your doctor enters sensitive info about you, and despite having been told not to train on it, OpenAI uses it anyway due to a bug. A year from now, ChatGPT is repeating your sensitive info to anyone and everyone who asks.
For most purposes that seems to be sufficient, doesn't it? Or are there reasons not to trust OpenAI on this one?