
The algorithms themselves may not 'choose' to discriminate, but they can certainly be used in a way that causes discrimination, even unintentionally, through an oversight on the part of the algorithm's designer.

See e.g.:

Fairness as a Program Property, Aws Albarghouthi et al., FATML 2016, http://pages.cs.wisc.edu/~aws/ (Note: I can't find the paper link; maybe the conference hasn't occurred yet, but Aws has already given a pre-talk on this topic.)

Part of the problem with algorithms is that they allow us to be sloppy in our assignment of responsibility. We think "the computer can't be biased", which is of course true, but ignore the fact that the human designer of an algorithm could have made a mistake. And because of the nature of computer programs, these mistakes can be arbitrarily subtle. The above paper (I'm recalling from the talk now) applies probabilistic reasoning to prove that certain kinds of programs are "fair" with respect to a given population distribution, for a very limited set of language features (e.g. no loops). But static analysis is a very hard problem, and it is unlikely we'll ever see a solution that generalizes well to anything we'd recognize as a useful programming language.
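
To make the idea concrete, here's a toy sketch of my own (not from the paper, and with made-up numbers): a loop-free decision program plus a model of the population, and a demographic-parity-style check of whether the decision rate depends on the sensitive group. The paper's verifier reasons about this symbolically; I'm just estimating the rates by sampling.

    import random

    # Hypothetical loop-free "decision program": approve based on income
    # and zip code. It never reads the sensitive attribute directly, but
    # zip code is correlated with it, so bias can still creep in.
    def decide(income, zip_code):
        return income > 40_000 and zip_code != "zone_a"

    # Toy population model (all numbers invented): the sensitive group
    # shifts both the income and the zip-code distributions.
    def sample_person(group):
        if group == "A":
            income = random.gauss(48_000, 10_000)
            zip_code = random.choice(["zone_a", "zone_b"])
        else:
            income = random.gauss(52_000, 10_000)
            zip_code = "zone_b"
        return income, zip_code

    def approval_rate(group, n=100_000):
        # Monte Carlo estimate of P(decide = True | group)
        return sum(decide(*sample_person(group)) for _ in range(n)) / n

    rate_a, rate_b = approval_rate("A"), approval_rate("B")
    print(f"group A: {rate_a:.3f}  group B: {rate_b:.3f}  ratio: {rate_a / rate_b:.3f}")

If the ratio falls far below 1, the program is unfair in the demographic-parity sense for that population, even though nothing in the code mentions the group. Doing this exactly and automatically, rather than by sampling a hand-written model, is the hard part.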

Edit (finishing my line of thought): So certainly bias exists in either case. I'm not trying to claim that using algorithms increases bias. However, algorithms can make the decision process opaque, and in that sense 'hide' the bias. Unfortunately, it seems that if we want to use algorithms in these settings, we'll either need rigorous models like the above that are amenable to static analysis, or we'll have to give up and return to where we were before.


