Right - like I said, fighting for equality of opportunity is worthy and must be pursued relentlessly.
What seems to be happening (the impression I get) is something analogous to this:
I have a game released on two mobile platforms - Android and iOS. The iOS players are disproportionately scoring better than the Android players.
Some are suggesting that the server is bugged/programmed to give iOS players better scores. I reject this.
Some are saying that the Android user experience has conditioned those players to be satisfied with a low score. I reject this.
Some are saying iOS players are superior because they bought an Apple product. I reject this.
My opinion is that the Android release likely has a lot of performance issues which impede players' ability to get a high score on that platform. I test this by doing things like getting an Android player to play on an iOS device and checking whether their performance improves, and doing the reverse check as well. I run various tests on FPS, input lag, etc.
If my experiment fails to show anything conclusive, I should be open to re-evaluating some of the things I rejected - but instead I refuse, out of zealous belief in my hypothesis. The answer must be optimization, I insist.
If my experiment succeeds, I should tweak the Android code until the disparity goes away. Instead, I just hardcode +500 onto the Android scores and pat myself on the back.
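In code terms, the "hardcode +500" move might look something like this (a toy sketch; compute_score and its scoring formula are invented purely for illustration):

```python
def compute_score(raw_inputs: dict) -> int:
    """Score a play session from raw gameplay inputs (hypothetical formula)."""
    return raw_inputs["hits"] * 10 - raw_inputs["misses"] * 2

def reported_score(raw_inputs: dict, platform: str) -> int:
    """The anti-pattern: paper over the measured disparity instead of
    fixing the performance issues that cause it."""
    score = compute_score(raw_inputs)
    if platform == "android":
        score += 500  # hardcoded compensation; root cause left untouched
    return score
```

The disparity in the reported numbers disappears, but the underlying gameplay experience that produced it is exactly as broken as before.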
There seem to be cases where the diversity disparity is "solved by hardcoding", and other cases where the data doesn't support the hypothesis, but there is an ardent refusal to accept it, as if it would be blasphemous to suggest otherwise. These are pretty much the only things I take issue with.
And now assume your app is competing with another app, and its Android users are frustrated about their low scores.
So while you are heads-down in a profiler, figuring out why your Android users might be struggling, your competitor simply hardcodes the +500 onto the score and makes a big PR announcement about how they restored justice for the previously discriminated-against users.
Guess who's getting more sales next quarter. Unfortunately, that's how politics works.
If my experiment fails to show anything conclusive, I should be open to re-evaluating some of the things I rejected - but instead I refuse, out of zealous belief in my hypothesis.
You are arguing against a straw man. The "experiments" in this case show ample evidence of biases, so as long as those biases exist, the underlying hypothesis cannot, by definition, be falsified.
Perhaps I am not very well read on the subject. My argument was rooted in the attitude I came across when this story was trending (linked at the end of this comment). That research might have been disproved later on; I never followed up on the story. What I was referring to was the readiness with which some were willing to stop pursuing this sort of trial.
My intent wasn't to strawman - if this is in fact non-existent then I concede that point.
Which experiments? I'm not aware of much in the gender diversity 'sciences' that would rise to the level of being called an experiment, except for the sort of neuropsychology that Damore cited and got slated for (because those studies show biological reasons for differences in subject interest).