If it's so easy to fix the bias, then why isn't it fixed, always? That should be the expectation.
More likely, the reason that bias isn't routinely fixed is that it isn't easy, and these kinds of biases do make it into production systems. Which makes it a net positive in my view that the occasional shitstorm reminds society of this fact.
Can that be unpleasant for AI researchers? Sure... but if it bothers them, then perhaps they could focus their research on trying to fix the problem?
Physicists had and have unpleasant conversations about their moral responsibility for nuclear weapons. Other fields of research should take their moral responsibilities seriously as well (not just AI research, by the way).