It bothers me, though, that the standard for automation is that it must /never/ hit a parked car, not "at least as good as the average human" or "at least as good as the 95th percentile human" etc.; I don't know enough to judge what's going on in this situation, but if the technology saves more lives/property/etc than it damages, IMO it's worth adopting.
Agreed. Zero tolerance (or 100% reliability) necessarily has infinite cost and/or takes infinite time. We need to be reasonable about our expectations for autonomous systems.
That said, what's the likely accident rate for a 95th percentile human, starting in a parked car, hitting another stationary vehicle parked directly in front of them? There must be a few "accidentally put it in drive instead of reverse" type incidents, but I'd expect it to be exceedingly rare.
The swimming pool near me had to replace their brick wall at the front of their property 3 times in six months. The culprit each time was grandparents dropping off grandkids.
It's a pretty rare occurrence given good conditions and a competent driver. Add driver impairment, fog, etc. and it becomes more plausible. It's all hearsay until we look at some insurance claim data, though.
The pool has now installed steel bollards in front of each parking spot, by the way.
To be fair, an autonomous vehicle will probably also never accidentally put the vehicle in drive instead of reverse. The particular failure modes are likely to be radically different in many cases, so it seems reasonable to gloss over their individual differences and talk about them in aggregate.
The difficulty will be in assigning responsibility for these accidents. Will autonomous car manufacturers carry the insurance burden for their software or the consumer? Will insurance companies have to evaluate whether vehicles have been "jailbroken"?
Statistical analysis fails for small samples. In a single case, it is never possible to determine what would have happened had a human been controlling the wheel instead of the autonomous system. With either, accidents will happen, even if rarely. When a human is at the wheel, the punishment meted out to that human acts as a signal to them and to others that they have to be more careful in how they control the vehicle. Therefore, in the case of any accident involving Tesla's autonomous system, Tesla (or any other company providing autonomous control) should be made to shoulder the blame, so that they are prodded not only to make their systems more robust, but also to design the system to ask for human intervention when it senses it cannot make a good judgement in the conditions at hand.
True and I generally agree. But hitting a parked car I would expect to be extraordinarily rare for an autonomous vehicle. Isn't that the most basic test?
First, no one expects automation to be perfect, but people do have a reasonable expectation of it being much better than an average human driver. Most accidents (in good weather conditions) happen when drivers are distracted, tired or sick. This does not apply to an automated system, and even when something goes wrong the system should go into a fail-safe mode (in this case, stop).
Second, "at least as good as the average human" is a bad benchmark. Not because an average human is so bad at driving, but because people make high-level judgements about acceptable risks. For example, you are much, much less likely to dent your bosses Porch than some random car. AI is equally likely to hit either.
The human average is around 185 crashes and 1 fatality per 100 million miles, which is pretty damn impressive considering the huge variation in terrain and skill. I'll be very surprised if any self-driving tech right now can even dream of coming close to those stats.
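For scale, here's a quick back-of-the-envelope conversion of those figures into miles driven between incidents, just taking the rates above at face value (the constants below are those quoted numbers, nothing more):

    # Convert the quoted rates (assumed at face value) into miles per incident.
    CRASHES_PER_100M_MILES = 185
    FATALITIES_PER_100M_MILES = 1

    miles_per_crash = 100_000_000 / CRASHES_PER_100M_MILES        # ~540,000 miles
    miles_per_fatality = 100_000_000 / FATALITIES_PER_100M_MILES  # 100 million miles

    print(f"Roughly one crash every {miles_per_crash:,.0f} miles")
    print(f"Roughly one fatality every {miles_per_fatality:,.0f} miles")

So even a merely average driver covers on the order of half a million miles between crashes, which is the bar a self-driving system would have to clear under comparable conditions.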
And that's just the human average, which isn't actually representative of anything because the 'average driver' does not exist. I don't have any statistics, but I'm pretty sure the majority of accidents/incidents are distributed over a small minority of drivers. I remember a recent article [1] about some statistical research into self-driving cars which indicated that at least 275 million miles of autonomous driving, in all conditions, without serious accidents, are needed to conclusively prove that they are safer than human drivers.
Statistics are always difficult and hard to translate into conclusions, but in the case of autonomous cars it seems like advocates are really willing to bend them to the extreme to make a point about how autonomous cars will be safer than humans, even though it's impossible to say anything sensible about that, except that 'the average driver' as a goal for safety seems like a very bad target to aim at.
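To see where a number in that range can come from, here's a minimal sketch using the statistical rule of three: if zero fatal crashes are observed over N miles, the approximate 95% upper confidence bound on the fatality rate is 3/N. The 1.09-fatalities-per-100-million-miles benchmark plugged in below is my own assumption for illustration, not a figure from this thread:

    # Rule of three: zero observed events over N miles gives an approximate
    # 95% upper confidence bound of 3 / N on the underlying rate.
    human_fatality_rate = 1.09 / 100_000_000  # fatalities per mile (assumed benchmark)

    # Fatality-free miles needed before the 95% upper bound on the autonomous
    # system's rate drops to the human benchmark:
    required_miles = 3 / human_fatality_rate
    print(f"~{required_miles / 1e6:.0f} million fatality-free miles")  # ~275 million

And that's only to demonstrate parity on fatalities at 95% confidence; showing a meaningful improvement pushes the required mileage even higher.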
I don't have any statistics but I'm pretty sure the majority of accidents/incidents are distributed over a small minority of drivers
That's a good point. It makes sense that the standard we want self-driving cars to achieve is the crash rate under average driving conditions, not the average driving crash rate. The latter sets a lower bar, since it's inflated by drunk driving, drugged driving, joyriding and other willful neglect.
It bothers me that Tesla can claim "beta" on a feature they've enabled on real-world consumer cars. This isn't about never hitting anything; this is about dodging liability by claiming the feature shouldn't have been used.
If it's available to a regular consumer (as opposed to, say, a test driver), it's deployed and will be used.