I was talking about testing the application rather than training the model. You can train an ML model on a diverse range of faces, but if, for example, an application that uses the model doesn't calibrate the camera properly, it still won't work for darker faces.
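To make the kind of application-level issue I mean concrete, here's a minimal sketch (using OpenCV; the threshold value and the downstream recognition step are illustrative assumptions, not anyone's real pipeline) that checks whether a detected face region is badly underexposed before it ever reaches the model, instead of blindly trusting whatever the camera's auto-exposure produced:

```python
import cv2
import numpy as np

# Face detector shipped with OpenCV; the recognition model itself is out of scope here.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def preprocess_faces(frame, min_mean_luma=60):
    """Return face crops, fixing badly underexposed ones before they reach the model.

    min_mean_luma is an illustrative threshold, not a calibrated value.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    crops = []
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]
        if np.mean(face) < min_mean_luma:
            # Underexposed face region: apply a local contrast correction here,
            # or have the app adjust the camera exposure and capture again.
            face = cv2.equalizeHist(face)
        crops.append(face)
    return crops
```

Whether the app fixes the exposure in software or re-drives the camera doesn't really matter; the point is that this is the application's job, and it's exactly the sort of thing that only gets done if someone tested with faces it would otherwise fail on.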
The gap in facial recognition accuracy between demographic groups shows up across multiple facial recognition systems. It's highly unlikely that every affected system owes its inaccuracy to a post-training configuration issue.
To me it seems more like you are arguing for the sake of arguing than that you actually believe your point. You are just trying to conceive of imaginative possibilities that preserve the "dumb facial recognition developers forgot non-white people exist" interpretation. The problem is that your imagined alternatives are not plausible explanations.
I gave a single example of why an app might fail at facial recognition in some cases. Obviously I wasn't suggesting that would be the case for every app that doesn't work. Suggesting I was is a strawman of ridiculous proportions.
I also didn't say anything about ML training data in the post you replied to. My point was about facial recognition app developers not testing with a diverse range of inputs, which is why we see apps failing in public when people try them with inputs they were never tested against. If the apps were well tested, those cases would have been caught before the public ever saw them.
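This is roughly the kind of test I'd expect to exist, sketched pytest-style. `load_test_images`, `recognize`, the group names and the accuracy bar are all hypothetical stand-ins for whatever the app's own fixtures and pipeline look like; the point is just that accuracy gets checked per group rather than only in aggregate:

```python
import pytest

# Hypothetical helpers: load_test_images yields (image, expected_identity) pairs
# from a labelled test set; recognize is the app's end-to-end recognition call.
from myapp.testdata import load_test_images   # hypothetical module
from myapp.pipeline import recognize          # hypothetical module

GROUPS = ["group_a", "group_b", "group_c", "group_d"]  # however the test set is partitioned
MIN_ACCURACY = 0.95  # illustrative bar, not a real benchmark figure

@pytest.mark.parametrize("group", GROUPS)
def test_recognition_accuracy_per_group(group):
    """Accuracy must clear the bar for every group, not just on the pooled test set."""
    samples = list(load_test_images(group))
    assert samples, f"no test data for {group}"
    correct = sum(1 for image, expected in samples if recognize(image) == expected)
    accuracy = correct / len(samples)
    assert accuracy >= MIN_ACCURACY, f"{group}: {accuracy:.2%} < {MIN_ACCURACY:.0%}"
```

A pooled accuracy number can look great while one group's accuracy is terrible, which is exactly the failure mode that turns up in public.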
My original point stands - if your testing sucks then it's very likely that your app will suck too. There will be problems with it that you don't know about, and the users will find them very quickly.
Do you have any data showing facial recognition doesn't work well on black people? My understanding is that this is an urban legend - it's not true, and the algorithms work fine given a decent-quality camera and video (the same for everyone).