There is a grave danger that Tesla's premature push of autonomous features could result in a PR disaster for self-driving technology if it actually ends up killing someone.
We shouldn't have this in the wild until we're sure it's ready.
I recently read a really interesting book called "Empires of Light" about the early days of electricity. Basically, people got electrocuted all the freaking time before we really figured out how to wire things safely. At one point there was a huge tangle of telegraph and power wires haphazardly strung all over New York City. People would abandon old wires in place and just run new ones on top of them.
So, there's going to be some deaths. Without a doubt, before autonomous cars are fully integrated into society, there will be some deaths that would not have happened with a human driver. That's always the cost of human progress.
Of course we should do everything we can to minimize the risk, but there's no way to guarantee a new technology will be 100% perfect on the first try, or the second try, or the 50th try. What scares me is that one of these deaths will happen and the public outcry will kill the whole endeavor before it ever gets off the ground. We shouldn't let that happen.
Aside from the obvious "what about (security) flaws in the software of (semi-)autonomous cars?", I'm especially thinking about scenarios where some sort of sensor jammer is used to blind or misguide the vehicle (laser pointers blinding pilots are already a thing, so clearly there are people willing to try it). I have a feeling we'll hear about that in the future.
Someone is going to die sooner or later if this technology gets into production. In 100 years I bet people will still die due to software bugs -- but hopefully very few. The important thing is whether the feature delivers a net reduction in total deaths, and I believe that can be said of the Autopilot features that ship today.
I think if autonomous technology ends up killing somebody, PR is the last angle we should worry about. Let's first worry about pushing a technology that, y'know, kills people.
That line of thinking is erroneous in my opinion. Autonomous technology only needs to kill a few fewer people than the existing manual technology to be worth debating, and it's a no-brainer if it kills orders of magnitude fewer people.
Roughly 1 person dies per 100 million VMT (vehicle-miles travelled) in the US [0], and the number's declining. So if Tesla sells a thousand cars, each travelling a thousand miles, and those cars kill 2 people in the first year, they're already above the mark. If there had been a person between the truck and the Tesla, they'd be way above the mark. Even more so when you consider "Summon Mode" isn't full self-driving and is probably in use less than 0.1% of the time the car is turned on.
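For the arithmetic, here's a minimal back-of-the-envelope sketch in Python; the fleet size, mileage, and death count are the hypothetical figures from above, and the baseline rate is the ~1-per-100-million-miles US average cited elsewhere in this thread:

    # Back-of-the-envelope comparison against the US baseline fatality rate.
    # Assumes ~1 fatality per 100 million vehicle-miles traveled (VMT).
    BASELINE_FATALITIES_PER_MILE = 1 / 100_000_000

    cars = 1_000             # hypothetical fleet size from the example above
    miles_per_car = 1_000    # hypothetical miles driven per car
    observed_deaths = 2      # hypothetical death count from the example above

    expected_deaths = cars * miles_per_car * BASELINE_FATALITIES_PER_MILE
    print(f"expected {expected_deaths:.2f}, observed {observed_deaths}")
    # -> expected 0.01, observed 2: roughly 200x the baseline rate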
More problems are caused by taking things too fast than by taking them too slow. Full self-driving cars are probably farther off than you guys seem to think.
There are somewhere around 100,000 Tesla cars on the road (as of the end of 2015). If each is driven, say, 10,000 miles a year, that's a total of 1,000 million miles driven. At the current US average of over 1 fatality per 100 million miles driven, that works out to about 10 expected fatalities per year among Tesla drivers.
Not all Tesla cars are going to be driven in self-driving mode at first, so we can expect the numbers to look much worse early on. If only 20 percent of Tesla drivers let the cars self-drive (and of course only the newer Teslas will be capable of self-driving, because of sensors, etc.), the human-driver baseline drops to about 2 expected deaths per year in that 20 percent.
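Sketching that calculation out (assuming the fleet-size and annual-mileage figures above):

    # Expected annual fatalities for the Tesla fleet at the US average rate.
    US_FATALITIES_PER_100M_MILES = 1.0   # slightly over 1 in recent years

    fleet_size = 100_000          # approx. Teslas on the road, end of 2015
    annual_miles_per_car = 10_000 # assumed annual mileage per car
    total_miles = fleet_size * annual_miles_per_car  # 1,000 million miles

    expected = (total_miles / 100_000_000) * US_FATALITIES_PER_100M_MILES
    print(expected)               # 10.0 expected fatalities per year

    # If only 20% of those miles are self-driven, the human-driver
    # baseline for that slice is 20% of the total:
    print(expected * 0.20)        # 2.0 expected deaths in that 20 percent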
I used to think that these kinds of numbers would act as a barrier to the development of self-driving cars, but each time one car has an accident, all of the cars can learn how to avoid it the next time. Every human driver has to learn what to do on icy roads, what to do when cut off by a car in a neighboring lane, and what to do when approaching a neighborhood where kids and dogs are playing ball in the front yards, but a self-driving car only has to learn once, and all the other self-driving cars will know what to do too.
When I was growing up, there were around 6 or 7 times as many people killed per vehicle-mile traveled. I hope that self-driving cars, when they first hit the road, won't be as dangerous as the cars of the '50s and '60s were. In the longer run, as thousands and eventually millions of self-driving cars begin driving, I expect them to improve rapidly through their shared experiences.
That depends on the problems actually being software-fixable.
In this case, the sensors do not actually monitor the complete space taken up by the vehicle, so this kind of accident would be impossible to prevent by modifying software.
The public may not react rationally to deaths caused by autonomous vehicles. If the technology kills people at a lower rate than the existing technology (manual control), then pushing it seems appropriate, and worrying about PR is really worrying about whether a life-saving technology survives public opinion. Good PR could save lives.
>Let's first worry about pushing a technology that, y'know, kills people.
I agree. We need to get all car ads off TV and quit pushing for people to own automobiles. Pushing this technology into the hands of as many unqualified people as possible is a recipe for death and disaster, killing tens of thousands of people each year.
I'll push for any technology which will kill fewer people.