An anecdote, but this criticism is currently being aimed at YouTube for its demonetisation scheme.
The argument being made is that even though demonetising certain classifications of content is immoral (e.g. demonetising any and all LGBT content, even when it's family friendly), it is tacitly being allowed to happen because of YouTube's stance that "the algorithm" made the decision, not them.
This may let them take a position of "supporting" LGBT people as a company while simultaneously being able to demonetise said content, which could otherwise hurt their business in countries actively hostile to LGBT people.
This isn't proven, but it's an example of how such a setup could be used to absolve yourself of blame by pointing it at an inscrutable system.
I believe this is correct. Assume a black-box model fails, to the point that its author is removed from the system (e.g. executed). How do you transition accountability to another party in a reasonable way?
I'm not following. If the author is gone, then who decided to use the model? Who decided which safeguards were put in place in case the model failed? Liability is a chain.
AI systems aren't activating themselves, they're being used because a chain of people are authorizing them based on promises made by other people. So if an AI system in a courtroom wrongly jails people, you can still hold judges accountable for using the system. You can still hold manufacturers and businesses accountable for promising a degree of accuracy that their system couldn't meet.
In the same way, if my dog bites someone, and I get sued and argue, "animal motivation is really hard, we don't know why the dog chose to do that", I'm not going to win the case. The reason it happened is because I didn't leash my dog.
The reason the black box hurt someone is because an accountable human improperly used it or inaccurately advertised it.
This is tort law, the worst-defined body of law we have. Liability is far more complex than that, and it's usually governed by irrational humans or by arbitrary bureaucratic rules contrived by powerful people protecting their interests.
But how do black boxes change that? It's still the same people operating under the same laws.
A few commenters here posit that without a black box, designers might be held accountable. If there's a system where software manufacturers would be held accountable for bugs, is any judge going to say, "well, the software is bugged, but they don't know how to fix the bugs, so they're off the hook"?
If there's proof that Twizzlers are poisonous and kill people, the Twizzlers manufacturer can't say, "but we don't know why, we just threw a bunch of chemicals in a vat at random without writing down the labels. So therefore, it's not our fault."
I don't know, can they? I'm not a lawyer, maybe everything in our legal system is way more horrifyingly broken than I assume.
There are payment systems based on human facial recognition. If the model fails, it means losing money. Apparently you cannot deny liability just because you used a black-box model.
Are you implying you cannot be held accountable for black box models?