
Humans are just not good at certain kinds of tasks. We can add numbers, but nowhere near as fast as a computer can. Similarly, we can see patterns in data, but not with the precision of a statistical model whose parameters have been optimally tuned with gradient descent and Bayesian inference. Humans will never be as good as statistical algorithms at certain tasks, and that's OK.

I see fear of algorithms everywhere. Previous articles insist that algorithms can be unfair or racist, and this article suggests things along those lines as well. The EU recently banned perhaps the majority of applications of machine learning, anywhere they might be used to rank individuals. This fear is hugely setting back society and technological progress, and almost every one of these places will have to revert to human judgement, which by every measure is far worse and far less fair.



The algorithms themselves may not 'choose' to discriminate, but they certainly can be used in a way that causes discrimination through an oversight on the part of the algorithm's designer, even when none was intended.

See e.g.:

Fairness as a Program Property, Aws Albarghouthi et al., FATML 16. http://pages.cs.wisc.edu/~aws/ (Note: I can't find the paper link; maybe the conference hasn't occurred yet, but Aws gave a pre-talk on this topic already.)

Part of the problem with algorithms is that they allow us to be sloppy in our assignment of responsibility. We think "the computer can't be biased", which is of course true, but ignore the fact that the human designer of an algorithm could have made a mistake. And because of the nature of computer programs, these mistakes can be arbitrarily subtle. The above paper (I'm recalling from the talk now) applies probabilistic reasoning to prove that certain kinds of programs are "fair" for a given population distribution and for a very limited set of language features (e.g. no loops). But static analysis is a very hard problem, and it is unlikely we'll ever see a solution that generalizes well to anything we'd recognize as a useful programming language.
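To make the "arbitrarily subtle" point concrete, here's a toy sketch (all names, features, and numbers are hypothetical, not from the paper): a decision rule that never mentions group membership can still fail a demographic-parity check when one of its input features happens to correlate with the group.

```python
import random

# Hypothetical loan-approval rule: never looks at "group", but uses
# a zip-code-derived feature that (in this synthetic population)
# correlates with group membership. Threshold is illustrative only.
def approve(applicant):
    return applicant["income"] + applicant["zip_bonus"] > 50

def sample_population(n, seed=0):
    rng = random.Random(seed)
    pop = []
    for _ in range(n):
        group = rng.choice(["A", "B"])
        # Synthetic correlation: group B tends to live in zip codes
        # the rule effectively penalizes.
        zip_bonus = 10 if group == "A" else 0
        income = rng.gauss(50, 10)
        pop.append({"group": group, "income": income, "zip_bonus": zip_bonus})
    return pop

def approval_rate(pop, group):
    members = [p for p in pop if p["group"] == group]
    return sum(approve(p) for p in members) / len(members)

pop = sample_population(10_000)
rate_a = approval_rate(pop, "A")
rate_b = approval_rate(pop, "B")
# Demographic parity would ask for |rate_a - rate_b| to be small;
# here the zip_bonus feature opens a large gap even though the rule
# never reads the "group" field.
print(rate_a, rate_b)
```

The kind of verification the paper describes would, as I understand it, prove bounds on this gap symbolically for a given input distribution, rather than estimating it by sampling as this sketch does.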

Edit (finishing my line of thought): So certainly bias exists in either case. I'm not claiming that using algorithms increases bias. However, algorithms can make the decision process opaque, and in that sense 'hide' the bias. Unfortunately, it seems that if we want to use algorithms in these settings, we'll need either rigorous models like the one above that are amenable to static analysis, or else to give up and return to where we were before.


What is meant by ranking individuals? How and why are people being ranked: by race, financial status, or something similar?





