>>After all your system has far more racism because human judges are racist. And why should someone spend twice as long in prison or be unable to get a loan because they are less attractive?
Because I don't think we can quantify the justice system. Like other commenters in this thread have pointed out, the jury system is explicitly based on the idea of being judged "by your peers", with all the biases and ideas that brings with it. Could you replace that with an AI? Maybe - but how would you know you reached the "correct" judgement then? In some bizarre scenario you might arrive at a situation where the AI reaches a judgement that literally no one is happy with - and at that point we're just ruled by a hivemind overlord, no? I'm being sarcastic, but there is a point where we serve the algorithm and not the other way around.

Don't get me wrong, I would gladly submit to Culture-style Minds (Iain M. Banks) because, as described in the fiction, they were being fair. But I have no trust that whatever we actually develop will be that fair. We seem to be building scattershot systems that look at the lowest common denominator and decide based on whatever is easy and obvious - or worse, we train them on existing systems. And (sorry for bringing up the racism example again) an AI trained on the current population of the US prison system would conclude that it has to target certain groups more, because... well, clearly they commit more crime! I have no trust it would be anything other than a shallow, simple-stat-based oracle that everyone would listen to.
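The "trained on existing systems" feedback loop is easy to demonstrate with a toy simulation. This is a hypothetical sketch with entirely made-up numbers, not a model of any real justice system: two groups have the identical underlying offense rate, but one is policed three times as heavily, so it shows up in the arrest records far more often. A naive "risk model" fit to those records then scores that group as roughly three times riskier, even though the groups are, by construction, identical.

```python
import random

random.seed(0)

# Two groups with the SAME true offense rate; group "A" is policed
# 3x more heavily. All parameters are invented for illustration.
TRUE_OFFENSE_RATE = 0.05
POLICING_INTENSITY = {"A": 3.0, "B": 1.0}
POPULATION = 10_000

arrests = {"A": 0, "B": 0}
for group in ("A", "B"):
    for _ in range(POPULATION):
        offended = random.random() < TRUE_OFFENSE_RATE
        # An offense only becomes an arrest record if it is detected,
        # and detection scales with how heavily the group is policed.
        detected = random.random() < 0.2 * POLICING_INTENSITY[group]
        if offended and detected:
            arrests[group] += 1

# A naive "risk model" trained on arrest records just learns the
# per-group arrest rate - it has no way to see the policing bias.
risk_score = {g: arrests[g] / POPULATION for g in arrests}
print(risk_score)
```

The model isn't lying about its training data; the data itself encodes the bias, and the "shallow, simple-stat oracle" faithfully reproduces it.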