Basically (to summarize a lot), his point was: an ML model is only as good as the data you feed it. If, say, the photos for your face recognition model are only of white men, then obviously the model will do well on white men, while (possibly) not doing as well on other races or genders. This is a statement of fact, and there is nothing controversial about it.
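To make that point concrete: the standard way to surface this kind of dataset bias is to disaggregate accuracy by subgroup instead of reporting one overall number. Here's a minimal sketch with entirely hypothetical data (the group labels and numbers are made up for illustration):

```python
# Illustrative sketch with hypothetical data: report accuracy per subgroup,
# not just overall, to expose skew from an unrepresentative training set.
from collections import defaultdict

def accuracy_by_group(labels, preds, groups):
    """Return overall accuracy and a per-group accuracy breakdown."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for y, p, g in zip(labels, preds, groups):
        totals[g] += 1
        hits[g] += int(y == p)
    overall = sum(hits.values()) / sum(totals.values())
    per_group = {g: hits[g] / totals[g] for g in totals}
    return overall, per_group

# Hypothetical results from a model trained mostly on group "A" photos;
# both errors happen to fall on the underrepresented group "B".
labels = ["match"] * 10
preds  = ["match"] * 8 + ["no_match"] * 2
groups = ["A"] * 6 + ["B"] * 4

overall, per_group = accuracy_by_group(labels, preds, groups)
# Overall accuracy looks fine (0.8), but group B is much worse (0.5).
```

The overall number hides exactly the failure mode described above: the model can look good on aggregate while performing badly on the groups the training data underrepresents.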
But she took offense to it, and started insulting him on Twitter. He got tired of defending himself, and just quit Twitter.
She was never uncivil, and she pointed out a series of issues, from what she believed was imprecise in his argument to the societal factors that caused her to be viciously attacked by other people for engaging in the discussion.
Other people questioned whether data alone can account for the biases, and they were not subjected to the kind of vitriol she was. The fact that I am a white cishet male shields me from a lot of misdirected anger.
Not only that, she never provided any links to the tutorials/papers/etc. that she said she had given on the topic, and when I looked into the one workshop (from memory) that someone else mentioned, it had literally nothing to do with the issue of dataset bias.
That episode gave me the impression she was more interested in drum-beating and axe-grinding than engaging constructively. I'm not surprised she seemed to be doing the same inside Google.