Hacker News

We seem to be fine with a future where AI becomes integrated into our legal system. Here is an excellent opportunity for AI to help identify potentially innocent people.



I have seen no evidence that supervised or unsupervised ML or "AI" has led to any more accurate application of justice.

I'm not fine with integrating it into the legal system, not while it's vulnerable to the current biases of law enforcement and the justice system. And it could be worse, because once those biases are codified in the machine, there will be a bias toward trusting the machine's evaluation over a human's.


How? What could AI possibly have done here?


I’d assume it would have less bias on any given case, and we’d hope cases like this would somehow be flagged by such a system. However, completely unbiased machine learning seems as hard to achieve as being completely unbiased ourselves.


AI tends to inherit both the biases of its programmers and various unknown biases it picks up on its own from the input data.

"The evidence photos are all on bright sunny days and contain blue rectangular objects in the top left, probably guilty."
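To make that joke concrete, here is a minimal sketch (entirely hypothetical data and feature names) of how a model can latch onto a spurious feature, one that merely co-occurs with the label in a biased training set, and look accurate while learning nothing causally relevant:

```python
# Toy illustration: "sunny photo" spuriously correlates with "guilty"
# in the (biased) training data, but not in the real world.
import random

random.seed(0)

def make_case(spurious_correlation):
    guilty = random.random() < 0.5
    if spurious_correlation:
        # In the biased training set, sunny photos co-occur with
        # guilty verdicts 90% of the time.
        sunny = guilty if random.random() < 0.9 else not guilty
    else:
        # In reality, the photo's weather is unrelated to guilt.
        sunny = random.random() < 0.5
    return {"sunny_photo": sunny, "guilty": guilty}

train = [make_case(True) for _ in range(10_000)]   # biased collection
test  = [make_case(False) for _ in range(10_000)]  # unbiased reality

# The "learned" rule: predict guilty whenever the photo is sunny,
# because that maximizes accuracy on the biased training data.
def predict(case):
    return case["sunny_photo"]

def accuracy(cases):
    return sum(predict(c) == c["guilty"] for c in cases) / len(cases)

print(f"train accuracy: {accuracy(train):.2f}")  # high: looks trustworthy
print(f"test accuracy:  {accuracy(test):.2f}")   # coin flip: it isn't
```

The model's impressive training accuracy is exactly the kind of number that invites "trust the machine," even though the feature it relies on is noise.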


The actual killer confessed; I think that is already a "flag".


AI could find a pattern and raise a flag so someone can review the case.


The AI would still have to make use of the data that is given to it. The prosecutors hid that data. Are you saying the AI should spy on prosecutors, too?



