A libel case arising from a situation like this could set an important precedent for how responsible companies need to be for their AI.
Simple: penalize the lack of oversight unless it can be proven that an AI/algorithm is significantly superior to a human at performing the task. So if a human trucker is asleep at the wheel while the AI drives and the truck crashes, fault the driver (and possibly the company, if it was policy) for negligence. If the driver is awake, the AI glitches out, and the driver does the best they can to rectify the situation but a crash still results, then it is what it is: a mistake.
The question is whether you rate superiority on the overall set of classifications or on the smaller set of problem cases. I suspect it's a much easier task to make an AI that's better at reducing accidents than the general public is, than it is to do that and also be better in a specific set of error conditions, such as "driving into glare, with roadwork and changed road conditions, correctly noticing that the dog running ahead off to the side with people chasing it might be a situation that could spill into the road..."
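To make that distinction concrete, here's a minimal Python sketch with entirely invented numbers (the scenario names, accident rates, mileage shares, and the overall_rate helper are all hypothetical): a system can halve the aggregate accident rate while still being several times worse than a human in one rare error condition.

```python
# Invented numbers: (share of total miles, accidents per million miles)
human = {"routine": (0.99, 2.0), "glare+roadwork+stray dog": (0.01, 20.0)}
ai    = {"routine": (0.99, 0.5), "glare+roadwork+stray dog": (0.01, 60.0)}

def overall_rate(profile):
    """Mileage-weighted accidents per million miles across all scenarios."""
    return sum(share * rate for share, rate in profile.values())

print(f"human overall: {overall_rate(human):.2f}")  # ~2.18
print(f"ai overall:    {overall_rate(ai):.2f}")     # ~1.09

# The AI roughly halves the overall accident rate, yet in the rare
# condition it is 3x worse than a human. Which set you rate on decides
# whether it counts as "significantly superior".
```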
In this case, though, I imagine they allocated far more resources toward positive correlations than toward exclusions, since they just want a way to get their name in front of you.
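For illustration only, here's a toy Python sketch of that trade-off; the similarity scorer, the names, and both thresholds are invented, not anyone's actual matching pipeline. A recall-biased threshold surfaces more borderline name matches (more "positive correlations"), while a precision-biased one would exclude more of them.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude string-similarity score in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

query = "John Smith"
candidates = ["John A. Smith", "Jon Smith"]

RECALL_THRESHOLD = 0.8     # hypothetical: tuned to surface more matches
PRECISION_THRESHOLD = 0.9  # hypothetical: tuned to exclude doubtful ones

for name in candidates:
    score = similarity(query, name)
    print(f"{name!r}: score={score:.2f}, "
          f"recall-biased match={score >= RECALL_THRESHOLD}, "
          f"precision-biased match={score >= PRECISION_THRESHOLD}")
```

With the lower threshold both candidates get shown, wrong person or not; tightening it drops the weaker match. A service optimizing for eyeballs has every incentive to pick the looser setting.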