Truthfully, you can no longer trust yourself (whichever side of this debate you're on). We're all primed now, and we'll pick up on any distinguishing characteristics. You'd have to listen in a blind test, with several clips that don't reveal which ones are OpenAI and which are from a movie or some other source that spoils it.
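The blind-test setup above can be sketched in a few lines: shuffle the clips, present the listener only anonymized names, and keep the answer key hidden until afterward. This is a minimal sketch; the filenames and source labels are made-up placeholders, not real data.

```python
import random

# Hypothetical clips -- filenames and labels are placeholders for illustration.
clips = [
    ("sky_demo_1.wav", "openai"),
    ("sky_demo_2.wav", "openai"),
    ("her_movie_1.wav", "movie"),
    ("interview_1.wav", "interview"),
]

def make_blind_trial(clips, seed=None):
    """Shuffle clips and return anonymized names plus a hidden answer key."""
    rng = random.Random(seed)
    order = clips[:]
    rng.shuffle(order)
    # The listener only ever sees "clip_1", "clip_2", ...
    presented = [f"clip_{i + 1}" for i in range(len(order))]
    files = {name: path for name, (path, _) in zip(presented, order)}
    answer_key = {name: src for name, (_, src) in zip(presented, order)}
    return files, answer_key

files, key = make_blind_trial(clips, seed=42)
```

Only `files` goes to the listener; `key` is consulted after they've written down their guesses, so priming can't creep in.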
And I wouldn't put the metric at 50/50, i.e. fully indistinguishable. The reasonable bar is whether it sounds __like__ her, which could hold even while you identify the chatbot 100% of the time! (e.g. what if I just had a roboticized version of a person's voice?) The truth is that I can send you clips of the same person[0], tell you they're different people, and a good portion of listeners will be certain they are different people (maybe __you're different__™, but that doesn't matter).
So use that as the litmus test either way. Not whether you think they're different, but rather "would a reasonable person think this is supposed to sound like ScarJo?" Not you, other people. Then ask yourself whether there is sufficient evidence that OpenAI either purposefully intended to clone her voice OR got so set in their ways (perhaps after she declined, having already hyped themselves up) that they tricked themselves into only accepting a voice actor who ended up sounding similar. That last part is important because it shows how such a thing can happen without the requirement ever being stated explicitly (and maybe without them even recognizing it themselves). Remember that we humans do a lot of subconscious processing (I have a whole other rant about people building AGI -- a field I'm in, fwiw -- not spending enough time understanding their own minds or the minds of animals).
Edit:
[0] I should add that there's a robustness issue here, and it's going to be a distinguishing factor for people deciding whether the voices are different. Without a doubt, those voices are "different" -- the question is in what way. The way someone's voice might change day to day? The way someone sounds on the phone vs. in person? Certainly the audio quality is different, and if you're expecting a 1-to-1 match where the waveforms line up perfectly, then no, you'd never be able to do this. But that's not a fair test.