There is really no way to know, or to say with logical justification, that a data set, or the deep learning model trained on it, is free from bias. That's the issue at hand. The social dimension of the application has to be discussed broadly to make any sense of it; it's a matter of opinion, and properly so. I worship the guy and I think he was attacked for things he didn't say, yet if Mr LeCun had acknowledged the limits of black-box DNN models, it would have been helpful for everyone following along. Timnit Gebru should have made the same point as well.
Sort of. I didn't read the whole exchange, but to me the thing to say would have been to be more explicit that the tool will always reflect the biases of its owners. Hopefully phrased to sound less Marxist, but something along those lines, instead of leaving the impression that the only issue was an inadequate data set.