Hacker News

The core distinction is understanding that bias in ML is not _just_ bias in the data. It may be true that we can reduce bias in this model by reducing bias in the training data, but there is a deeper, more fundamental problem of bias that will not be solved just by changing the training data. Marginalizing the discussion by pointing out that this case would benefit from less biased data is unproductive.


> Marginalizing the discussion by pointing out that this case would benefit from less biased data is unproductive.

Except for the part where it provides an actual workable solution to the problem at hand.
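To make "less biased data" concrete: one common, simple intervention is rebalancing the training set so that every group/label combination is equally represented before fitting a model. A minimal sketch in plain Python — the dataset, the `group` and `label` field names, and the oversampling strategy here are all invented for illustration, not any particular team's pipeline:

```python
import random
from collections import defaultdict

def rebalance(samples, key=lambda s: (s["group"], s["label"]), seed=0):
    """Oversample so every (group, label) stratum is equally represented."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for s in samples:
        strata[key(s)].append(s)
    target = max(len(v) for v in strata.values())
    balanced = []
    for stratum in strata.values():
        balanced.extend(stratum)
        # top up under-represented strata by sampling with replacement
        balanced.extend(rng.choice(stratum) for _ in range(target - len(stratum)))
    return balanced

# toy example: group "a" dominates the positive examples
data = ([{"group": "a", "label": 1}] * 90
        + [{"group": "b", "label": 1}] * 10
        + [{"group": "a", "label": 0}] * 50
        + [{"group": "b", "label": 0}] * 50)
balanced = rebalance(data)  # each of the four strata now has 90 samples
```

Of course this only addresses representation in the data, which is exactly the parent comment's point: it's a workable step, not a complete fix.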

To me this is an argument between two completely different mindsets: one that restricts itself to provable facts and one that restricts itself to political agendas. I don't see how the latter can also deal in facts, or how it belongs in a technical research discussion at all, frankly. You want to make laws that force companies to produce identical/equivalent outcomes for every race somehow? Just go lobby for it; perhaps it's a good idea. But you aren't going to reprogram mathematicians to think in political terms instead of mathematical terms.


What are some examples of racial bias in ML models which cannot be solved by just changing the training data?


I asked another commenter as well, but what are the proposed solutions then? People are obviously upset about ML and bias, is there a place I can get a summary of actionable next steps to lessen bias in ML?


There’s not one neat trick to make it go away. There have been a number of fairness and bias workshops and forums in recent ML conferences. There’s also a growing podcast and book collection on the topic. Timnit Gebru (mentioned in the OP article) has published and participated in a bunch, maybe start there.
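On the measurement side, one concrete starting point is to compute a simple fairness metric on a trained model's outputs. A common one is the demographic parity difference: the gap in positive-prediction rates between groups. A self-contained sketch (the function name and toy data are mine, for illustration only):

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals, positives = {}, {}
    for pred, g in zip(predictions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# toy example: group "a" is predicted positive 75% of the time, "b" only 25%
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5
```

A metric like this doesn't tell you *why* the gap exists or which notion of fairness is the right one for your application — that's where the workshops and the literature come in — but it does give you a number to track.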


To be frank, phrasing it like that makes it sound like Gebru has an ulterior motive: cashing in on books, speaker fees, and consulting.

Even assuming sincere good faith and real expertise on her part, that approach raises several "huckster alert" red flags.


This lack of actionable improvements or concrete guidelines reminds me a bit of the "political officers" in Marxist military units who ensure "compliance".


> This lack of actionable improvements or concrete guidelines reminds me a bit of the "political officers" in Marxist military units who ensure "compliance".

Sure, you can look at it that way if you like; that framing commonly results in hiring someone to be responsible for D&I without actually making any other changes.

A better response is more along the lines of "Not in MY Army" which makes it everyone's responsibility at every level.



