The problem I have with every discussion of AI risk is that people seem terminally underinformed about what is actually happening now, and about why a given risk is a risk at all.
"AI" isn't a problem which can destroy the legal system: because it takes regular human institutions to allow such a miscarriage of justice. Which as noted, they started doing, are still doing.
So you get this weird "perpetual future" perspective where everything "AI" is going to do is solely something the technology will cause, not its users, and the solution is always to prevent the technology from existing rather than fix the system - as though the US and other jurisdictions don't have long histories of injustice toward all sorts of groups.
The problems aren't new, and the solutions have nothing to do with whether you can create predictive algorithms. And "oh, but what about the scale..." is just a declaration that you were aware of the problem but pretty sure it wouldn't happen to you - because absolutely nothing prior actually prevented it except the social privilege you inherited, which meant you could ignore it.
[1] https://www.technologyreview.com/2019/01/21/137783/algorithm...
[2] https://www.wired.co.uk/article/police-violence-prediction-n...