Unfortunately, the method OpenAI may be using to reduce bias (appending words to the prompt without the user's knowledge) is a naive approach that can affect images unexpectedly and outside the domain OpenAI intended: https://twitter.com/rzhang88/status/1549472829304741888
I have also seen some cases where the bias correction may not be working at all, so who knows. And that's why transparency is important.
What a fascinating hack. I mean, yeah, naive and simplistic and doesn't really do anything interesting with the model itself, but props to the person who was given the "make this more diverse" instruction and said "okay, what's the simplest thing that could possibly work? What if I just append some races and genders onto the end of the query string, would that mostly work?" and then it did! Was it a GOOD idea? Maybe not. But I appreciate the optimization.
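For what it's worth, the suspected mechanism is simple enough to sketch in a few lines. This is purely a guess at the shape of it, not OpenAI's actual code: the descriptor list, the append position, and the function name are all assumptions.

```python
import random

# Hypothetical word lists the system might silently append. These are
# illustrative assumptions, not OpenAI's actual lists.
DESCRIPTORS = ["Black", "white", "Asian", "Hispanic", "female", "male"]

def diversify_prompt(user_prompt: str) -> str:
    """Append a randomly chosen demographic descriptor to the prompt,
    invisibly to the user, before it is sent to the image model."""
    return f"{user_prompt} {random.choice(DESCRIPTORS)}"

# The user typed only "a photo of a doctor"; the model sees something more.
print(diversify_prompt("a photo of a doctor"))
```

Which also makes the failure mode obvious: the appended word competes with everything else in the prompt, so a query like "a person holding a sign that says ..." can end up with the injected word rendered on the sign, as in the linked tweet.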
I thought the same thing, but I think the commenter is making a joke; I could be wrong.
I think they are suggesting that things like this (neural nets, etc.) work by learning bias, and that by removing "bias" the developers are making the product worse.