No it's not. People here have a really exaggerated vision of how bad human drivers are. Even the most pessimistic views of how common accidents are suggest that there is one contact collision every 75k to 100k miles or so.
An autonomous driver which was twice as dangerous as a human, then, would still go 37.5k to 50k miles between collisions. A human trying to backstop that robot's fallibilities would be required to pay close attention for weeks between actions. Which is inhuman.
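A back-of-envelope sketch of that claim. The human collision rate (one per 75k to 100k miles) and the 2x penalty are from the comment above; the driving pace (300 miles per week) is a hypothetical figure I picked just to turn miles into time.

```python
# Human baseline from the comment above: one collision per 75k-100k miles.
human_miles_per_collision = (75_000, 100_000)
robot_penalty = 2          # "twice as dangerous as a human"
miles_per_week = 300       # hypothetical driving pace, not from the thread

for miles in human_miles_per_collision:
    robot_miles = miles / robot_penalty
    weeks = robot_miles / miles_per_week
    print(f"{robot_miles:,.0f} miles between collisions "
          f"≈ {weeks:.0f} weeks of vigilant supervision")
```

Even under these rough assumptions the safety driver would wait on the order of a hundred weeks between events that actually require them, which is the point about sustained human attention.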
I admit that I have no idea what the accident rate is in Uruguay or Brazil.
In the US:
The Department of Transportation gets reports of one accident per 250k miles (roughly). It is broadly agreed that many accidents are unreported, with estimates of the true rate ranging from 1/200k miles to about 1/75k miles.
For example, this document from the US Department of Transportation:
Wow, yes, there are a LOT fewer accidents in the U.S. According to other statistics, there are 5 million accidents for a population of 300 million. In Uruguay there are 50,000 accidents for a population of 3 million, and with a LOT lower average mileage per driver.
That's something we've discussed a lot here - there's NO way self-driving cars can go around South American streets - unless they learn to be very aggressive, beep the horn, cross streets whenever they can, shout and otherwise interact with other drivers.
And we mostly don't have highways. Americans drive a lot on highways, and that must skew the per-mile accident rate.
I don't know how often accidents like fender-benders go unreported in the U.S. though.
Before anyone chimes in, yes, 5 million in 300 million is the same per-capita rate as 50,000 in 3 million.
What I wanted to mention is that there are a LOT more cars in the U.S., and the average US driver drives a LOT more than the average Uruguayan driver. (I'd have to look up hard numbers, but that's the gist of it.)
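To make the per-capita vs. per-mile distinction concrete: the accident and population counts below are the ones quoted in this thread, but the annual-mileage figures are hypothetical placeholders (only their ratio matters for the argument).

```python
# Accident and population numbers quoted in the thread.
us_accidents, us_population = 5_000_000, 300_000_000
uy_accidents, uy_population = 50_000, 3_000_000

# Per capita, the two rates are identical: 1 in 60 people per year.
per_capita_us = us_accidents / us_population
per_capita_uy = uy_accidents / uy_population

# Hypothetical annual miles per capita, assuming Americans drive far more.
us_miles_per_capita = 10_000
uy_miles_per_capita = 2_000

miles_per_accident_us = us_population * us_miles_per_capita / us_accidents
miles_per_accident_uy = uy_population * uy_miles_per_capita / uy_accidents

print(f"per-capita rate: US {per_capita_us:.4f} vs Uruguay {per_capita_uy:.4f}")
print(f"miles per accident: US {miles_per_accident_us:,.0f} "
      f"vs Uruguay {miles_per_accident_uy:,.0f}")
```

Under these assumed mileages the per-capita rates come out equal while the US looks several times safer per mile driven, which is exactly why the two statistics tell different stories.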