> Given the publicity such problems have gained in the community, one would expect publishers of any model to verify it doesn't fail with the most obvious examples. Not doing so is negligent at best.
Here's what the authors say about their own work:
> PULSE makes imaginary faces of people who do not exist, which should not be confused for real people. It will not help identify or reconstruct the original image.
Furthermore, this author appears to describe themselves as more of an artist and hobbyist. This isn't someone making some kind of statement about ML research. This is someone playing with computerized art, and the entire social media tech mob dogpiles their work over what, exactly?
The negligence here is on the part of everyone getting their hackles raised over nothing.
If they are in here, I hope they don't misconstrue my linking this image. I think they explained themselves very well, and I feel bad that they were thrust into the middle of this controversy.
My point is that while diversity might help, you need a lot more than that to address this problem.
We can say bye-bye to our nice demos and pre-trained models after this debacle. Who's going to risk their ass just to be caught with some unknown bias?
> The negligence here is on the part of everyone getting their hackles raised over nothing.
To be clear no one (or at least no one of note) has their hackles raised over this specific dataset. PULSE is fine for what it is, and no one criticized PULSE for having these results.
It is however a great demonstration for laypeople about how ML models aren't magic and don't always do what you, as a human, would expect. This is true irrespective of the source of that unexpected behavior.
That said, I believe the disclaimer you mention was added only after the recent twitter discussion.
This all may be true, but there's an issue when the model produces blue eyes when presented with a blurred African American face. I think much of the controversy would have been defused had the authors addressed this directly and used it as a way to discuss how bias sneaks into our models of the world.
Philosophically, I find ML's tendency to reflect the biases we bring to it very revealing. In some ways it shows us what we've built, the underlying biases that we'd rather argue about and ignore. When an algorithm selects longer sentences for black men than white men, we rightly see that as racism. Some say, "use better data, we'll then be objective!" But I wonder if a better initial reaction is, "wow, look how badly our system has failed that it would produce such a bad dataset." Never mind that maybe it's not actually possible to be objective, and that's the point. Math doesn't lie; maybe when we make a racist model, we're failing to see the mirror it's holding up for us.
Yes, they could have picked a classification model with social impact instead of a GAN. GANs are mostly toys for art and image augmentation. The bulk of models are supervised classification.