In fairness, though, it sounds like the point you and others are often making is this: humans are now considered dumb, biased, and unreliable, so we need to invest in some kind of external policing system (AI) to run our world for us and make sure we're doing it right. Basically, establish reliance on something external to ourselves?
This is sad, because it sounds like we're losing faith in our ability to evolve for the better and are hoping the machines can do a better job of self-improvement?
I'm genuinely curious about your point of view; sometimes I'm confused by the enthusiasm people have about this aspect of AI. Is it a form of distrust and dislike of society that makes us want to put faith in robots? A kind of adult angst?
I worry because we could be barking up the wrong tree if this is the case.
Humans are just not good at certain kinds of tasks. We can add numbers, but nowhere near as fast as a computer can. Similarly, we can see patterns in data, but not with the precision of a statistical model that has its parameters optimally tuned with gradient descent and Bayesian inference. Humans will never be as good as statistical algorithms at certain tasks, and that's OK.
I see fear about algorithms everywhere. Previous articles insist that algorithms could be unfair or racist, and this article suggests things along those lines as well. The EU recently banned perhaps the majority of applications of machine learning, in any place where they might be used to rank individuals. This fear is setting society and technological progress back enormously, and almost every one of these places will have to revert to human judgement, which by every measure is far worse and far less fair.
The algorithms themselves may not 'choose' to discriminate, but they certainly can be used in a way that causes discrimination through an oversight on the part of the algorithm's designer, even when none was intended.
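A minimal sketch of the kind of oversight I mean: a scoring rule that never reads a protected attribute can still discriminate through a correlated proxy. Everything here (the function, the zip codes, the weights) is invented purely for illustration.

```python
# Hypothetical loan-scoring rule. It never looks at a protected attribute,
# yet it can still discriminate through a correlated proxy (zip code).
# All names, codes, and numbers are invented for illustration.

def score_applicant(income, zip_code):
    """Score an applicant using only 'neutral' inputs."""
    score = income / 1000.0
    # Designer oversight: penalizing certain zip codes looks like a
    # neutral business rule, but zip code can correlate strongly with
    # protected attributes such as race.
    if zip_code in {"60621", "60636"}:  # invented example codes
        score -= 10.0
    return score

# Two applicants with identical income get different scores purely
# because of where they live.
print(score_applicant(50_000, "60614"))  # 50.0
print(score_applicant(50_000, "60621"))  # 40.0
```

The point is that nothing in the code mentions a protected attribute, so a naive audit of the inputs would call it fair.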
See e.g.:
Fairness as a Program Property, Aws Albarghouthi et al., FATML 2016
http://pages.cs.wisc.edu/~aws/ (Note: I can't find the paper link, maybe the conference hasn't occurred yet, but Aws gave a pre-talk on this topic already.)
Part of the problem with algorithms is that they allow us to be sloppy in our assignment of responsibility. We think "the computer can't be biased", which is of course true, but ignore the fact that the human designer of an algorithm could have made a mistake. And because of the nature of computer programs, these mistakes can be arbitrarily subtle. The above paper (I'm recalling from the talk now) applies probabilistic reasoning to prove that certain kinds of programs are "fair" for a given population distribution and for a very limited set of language features (e.g. no loops). But static analysis is a very hard problem, and it is unlikely we'll ever see a solution that generalizes well to anything we'd recognize as a useful programming language.
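To make the idea concrete, here is a toy sketch of checking a fairness property of a loop-free decision program against an assumed population distribution. This is brute-force enumeration over a discrete distribution, not the paper's static analysis, and the distribution, thresholds, and 0.8 disparate-impact cutoff are all invented for illustration.

```python
# Toy fairness check: a loop-free decision program plus an assumed joint
# distribution over (group, score). We compute per-group acceptance rates
# and test a demographic-parity-style ratio. All numbers are invented.

def decide(group, score):
    # Loop-free program with a subtle bug: group "B" faces a slightly
    # higher bar -- exactly the kind of mistake such an analysis targets.
    threshold = 0.5 if group == "A" else 0.6
    return score > threshold

# Assumed joint distribution P(group, score): two equal-sized groups,
# each uniform over three score values.
dist = {
    ("A", 0.4): 1/6, ("A", 0.55): 1/6, ("A", 0.7): 1/6,
    ("B", 0.4): 1/6, ("B", 0.55): 1/6, ("B", 0.7): 1/6,
}

def acceptance_rate(group):
    """P(accepted | group) under the assumed distribution."""
    num = sum(p for (g, s), p in dist.items() if g == group and decide(g, s))
    den = sum(p for (g, s), p in dist.items() if g == group)
    return num / den

# Group A accepts 2 of 3 score values, group B only 1 of 3, so the
# ratio is 0.5 and the (invented) 0.8 cutoff flags the program as unfair.
ratio = acceptance_rate("B") / acceptance_rate("A")
print(ratio >= 0.8)  # False
```

Enumeration like this only works because the input space is tiny and discrete; doing it symbolically over real programs and continuous distributions is the hard part the paper is after.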
Edit (finishing my line of thought): So certainly bias exists in either case. I'm not trying to claim that using algorithms increases bias. However, algorithms can cause the decision process to be opaque, and in that sense 'hide' the bias. Unfortunately, it seems that if we want to use algorithms in these settings, we'll need either rigorous models like the above that are amenable to static analysis, or else give up and return to where we were before.