Right, that's some kind of human-interest journalistic fudging and it's not true. But bias and surprising wrong answers in ML are obviously a real problem, and fixing the data is not always the right answer. You might not be able to tell what's wrong with the data, or where you could get any more of it, and you might be reusing a model for a new problem without the ability to retrain it.
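
As a rough illustration of what "not retraining" can look like, here's a minimal sketch (not from the original post) of post-processing a frozen model's scores with per-group decision thresholds; the scores, group labels, and target rate are all hypothetical stand-ins:

```python
import numpy as np

def pick_group_thresholds(scores, groups, target_positive_rate):
    """Choose a score threshold per group so each group's positive rate
    roughly matches the target, without touching the model or its data."""
    thresholds = {}
    for g in np.unique(groups):
        group_scores = scores[groups == g]
        # The (1 - rate) quantile of this group's scores is the cut-off that
        # labels roughly `rate` of the group as positive.
        thresholds[g] = np.quantile(group_scores, 1.0 - target_positive_rate)
    return thresholds

def predict_with_thresholds(scores, groups, thresholds):
    # Apply each example's group-specific threshold to its score.
    return np.array([s >= thresholds[g] for s, g in zip(scores, groups)])

# Toy usage: a "model" we can only call, not retrain, represented by its scores.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)             # stand-in for the frozen model's output scores
groups = rng.choice(["a", "b"], size=1000)  # stand-in for a sensitive attribute
thresholds = pick_group_thresholds(scores, groups, target_positive_rate=0.3)
preds = predict_with_thresholds(scores, groups, thresholds)
```

Whether threshold tweaking like this is appropriate depends entirely on the problem; the point is just that there are levers on the output side when the data and the weights are out of reach.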