Tesla: "the incident occurred as a result of the driver not being properly attentive to the vehicle's surroundings while using the Summon feature or maintaining responsibility for safely controlling the vehicle at all times."
That's the "deadly valley" I've written about before - enough automation to almost work, not enough to avoid trouble, and expecting the user to take over when the automation fails. That will not work. Humans need seconds, not milliseconds, to react to complex unexpected events. Google's Urmson, who heads their automatic driving effort, makes this point in talks.
There is absolutely no excuse for an autonomous vehicle hitting a stationary obstacle. If that happened, Tesla's sensors are inadequate and/or their vision system sucks.
Precisely. People can harp on about 'oh but it says to pay attention whilst using it' and 'oh but it's still in beta' all they want, but good engineering in the real world, where shit kills people regularly, means engineering out all possible human-machine failure modes.
This is the result of applying 'she'll be right' cavalier software engineering attitudes to real-world engineering (civil, mechanical, structural, etc.), because in the vast majority of cases a software mistake is completely erasable and is fixed with a simple refresh. Agile doesn't work in the real world. It hasn't ever worked in the real world, and that's why waterfall approaches and FEED are so engrained - they're proven to be the best method to avoid killing people and creating gargantuan cock-ups.
Tesla should not include features that are 'beta' in a car. Tesla should not include features that promote inattentive operation of the vehicle if they are not absolutely, several-sigma robust. It doesn't matter how many disclaimers you slap on it or how cool and futuristic it is, the feature fundamentally encourages people to stand away from their vehicle, press a button, do something else and have it magically arrive next to them. This is bad engineering design no matter how you try to spin it as 'innovation' and doesn't cut it in the real world.
edit: The entire aviation industry is a textbook in this concept.
- 7 million vehicles from multiple brands were equipped with Takata airbags that could blast out metal shards (2 deaths and 30 injuries reported).
- 30 million GM vehicles were recalled for faulty ignition switches that could shut down the engine while driving, plus prevent the airbag from deploying (at least 124 deaths in accidents where that happened).
"Stericycle, a recall consultant and service firm for automakers, said there have been 544 separate recalls announced [in 2014]"
Damn. You just reminded me that I received a recall notice for having Takata airbags in my car. I need to figure out when the repair will be available (the letter said that was yet to be determined).
The funny thing is that, for decades, there've been jokes back and forth about "if automotive engineering was like software development".
Now we're seeing what it's really like when automotive engineering and software development come together. And it's sometimes really similar to the jokes. :P
If the statistical reliability is better than a human driver, then this is good engineering.
"Man bumps car into trailer" would not have been a headline, because this happens so often it's completely boring. Notably the cost of failure here is property damage. If the car were moving fast enough to endanger human life, then a driver would be behind the wheel, and fully responsible by law and common sense for the motion of the vehicle.
These headlines are going to become more common as autopiloted cars become more popular. It is important to frame them in the context of transitioning from a system that is also unreliable-- the human nervous system.
We should also expect to see a few machine failures in specific situations where a human could have avoided damage, but we must also consider them against the easily avoidable mistakes that humans make every day, which a machine can avoid with near 100% reliability.
People are afraid of airplane crashes because they're dramatic, scary, newsworthy, and out of the passengers' control, but the complete story is that stepping on a plane is a safer activity than driving to the airport. Headlines about autopilot failures will get clicks for the same reasons, but if not framed with statistics, it's just noise.
I am not sure what your point is. The statistical probability of a human crashing into a stationary object while parking is pretty high. If the statistical probability of the machine crashing into a stationary object while parking is lower, then it is sensible (and even advantageous) to allow the machine to do the job, even if it is not 100% perfect at it.
I would say that is not quite true. I know of a number of cases where people were getting out of their stopped automatic car (brakes applied) and forgot to put it into park before standing up.
Also, people forget to use the handbrake in manual-transmission cars and the car rolls away after some time, or they put it into neutral instead of park.
In other words, move fast and break things doesn't apply to shit that can kill people when it does break. Many of SpaceX's problems seem to stem from the same mentality. I'm a fan of both, fwiw.
But it's slightly different to accept risk in an ambitious space program, where everybody knows that risk exists, than to put a beta feature that remotely moves two tons of steel into random consumers' hands.
If they follow their trend, any major new tech they trial fails the first few times. Two or three rockets blew up launching. Two or three rockets blew up landing.
You probably don't want the first two or three crew-rated capsules failing to stay pressurised.
It bothers me, though, that the standard for automation is that it must /never/ hit a parked car, not "at least as good as the average human" or "at least as good as the 95th percentile human" etc.; I don't know enough to judge what's going on in this situation, but if the technology saves more lives/property/etc than it damages, IMO it's worth adopting.
Agreed. Zero tolerance (or 100% reliability) necessarily has infinite cost and/or takes infinite time. We need to be reasonable about our expectations for autonomous systems.
That said, what's the likely accident rate for a 95th percentile human, starting in a parked car, hitting another stationary vehicle parked directly in front of them? There must be a few "accidentally put it in drive instead of reverse" type incidents but I'd expect it to be exceedingly rare.
The swimming pool near me had to replace their brick wall at the front of their property 3 times in six months. The culprit each time was grandparents dropping off grandkids.
It's a pretty rare occurrence given good conditions and a competent driver. Add driver impairment, fog, etc. and it becomes more plausible. It's all hearsay until we look at some insurance claim data, though.
The pool has now installed steel bollards in front of each parking spot, by the way.
To be fair, an autonomous vehicle will probably also never accidentally put the vehicle in drive instead of reverse. The particular failure modes are likely to be radically different in many cases, so it seems reasonable to gloss over their individual differences and talk about them in aggregate.
The difficulty will be in assigning responsibility for these accidents. Will autonomous car manufacturers carry the insurance burden for their software or the consumer? Will insurance companies have to evaluate whether vehicles have been "jailbroken"?
Statistical analysis fails for small samples. In a single case, it is never possible to determine what would have happened if there was a human controlling the wheel instead of the autonomous system. With either of those, the accidents will happen, even if rarely. In case of a human controlling the wheel, the punishment meted out to the human acts as a signal to that human and others that they have to be more careful in how they control the vehicle. Therefore, in case of any accident by Tesla's autonomous system, Tesla (or any other autonomous control providing company) should be made to shoulder the blame. So that they are not only prodded to make their systems more robust, but also prodded to design the system to ask for human intervention in case it senses it cannot make a good judgement in the conditions.
True and I generally agree. But hitting a parked car I would expect to be extraordinarily rare for an autonomous vehicle. Isn't that the most basic test?
First, no one expects automation to be perfect, but people do have a reasonable expectation of it being much better than an average human driver. Most accidents (in good weather conditions) happen when drivers are distracted, tired or sick. This does not apply to an automated system, and even when something goes wrong the system should go into a fail-safe mode (in this case - stop).
Second, "at least as good as the average human" is a bad benchmark. Not because an average human is so bad at driving, but because people make high-level judgements about acceptable risks. For example, you are much, much less likely to dent your boss's Porsche than some random car. AI is equally likely to hit either.
The human average is around 185 crashes and 1 fatality per 100 million miles, which is pretty damn impressive considering the huge variation in terrain and skill. I'll be very surprised if any self-driving tech right now can even dream of coming close to these stats.
And that's just the human average, which isn't actually representative of anything because the 'average driver' does not exist. I don't have any statistics but I'm pretty sure the majority of accidents/incidents are distributed over a small minority of drivers. I remember a recent article [1] about some statistical research into self-driving cars that indicated that at least 275 million miles of autonomous driving, in all conditions, without serious accidents, are needed to conclusively prove that they are safer than human drivers.
Statistics are always difficult and hard to translate into conclusions, but in the case of autonomous cars it seems like advocates are really willing to bend them to the extreme to make a point about how autonomous cars will be safer than humans, even though it's impossible to say anything sensible about that except that 'the average driver' as a goal for safety seems like a very bad target to aim at.
> I don't have any statistics but I'm pretty sure the majority of accidents/incidents are distributed over a small minority of drivers
That's a good point. It makes sense that the standard we want self-driving cars to achieve is the crash rate in average driving conditions, not the average driving crash rate. The latter sets a lower bar, since it is inflated by drunk driving, drugged driving, joyriding and other willful neglect.
It bothers me, that Tesla can claim "beta" on a feature they've enabled on real-world consumer cars. This isn't about never hitting anything, this is about dodging liability by claiming the feature shouldn't have been used.
If it's available to a regular consumer (as opposed to, say, a test driver), it's deployed and will be used.
Reminds me of Air France 447. When you train people to do a task using a robot, and they're used to using the robot 90% of the time, then when the robot decides it's had enough and hands back control unexpectedly, bad things can happen.
Asiana 214 [0] is also relevant. The aircraft went into a mode that partly disabled the auto-throttle. The pilot expected the engines to spool up automatically, but they didn't.
A key factor was that Asiana pilots were actively discouraged from hand flying the jet throughout most of the approach. They sometimes refer to younger pilots as "Children of the Magenta Line" [1] because of their over-reliance on the LCD flight director to fly the jet.
A similar situation could easily occur with Tesla autopilot which works as expected 99.9% of the time, with the driver caught off guard in the 0.1% of the time it doesn't work properly and causing a crash.
Ironically, AA has since made some efforts to scrub those video lectures from the internet, because they came from an advanced training course that was partly blamed for the crash of AA flight 587:
The NTSB indicated that American Airlines' Advanced Aircraft Maneuvering Program tended to exaggerate the effects of wake turbulence on large aircraft. Therefore, pilots were being trained to react more aggressively than was necessary.
The AF447 story is amazing. The pilots were pushing their sidesticks in opposite directions and the fly-by-wire system would cancel those inputs out. Meanwhile the plane was sinking toward the ocean, and the captain was asleep until the final minutes of the flight.
I think an autopilot confidence mood light should be placed on such vehicles. That way you could predict what the vehicle is about to do.
Do you mean a dash indicator light or the ambient lighting itself? Personally, I think that a cockpit with ambient lighting that adjusts based on autopilot status would be a much better indicator. When the autopilot is engaged and everything is copacetic, the lighting in the entire cockpit could be in a dim red color. As the autopilot gets less confident in the situation, the ambient lighting could get more yellow, with an increase in brightness.
When the autopilot loses enough confidence and wants to be turned off, the lighting should blink between two colors and an audible alarm should sound until the pilot presses a button to acknowledge that the autopilot wants to turn itself off.
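As a rough illustration of the mapping I have in mind, here's a minimal sketch; the thresholds, colors and function name are made up for the example, not taken from any real vehicle:

```python
# Toy sketch only: map an autopilot confidence score (0.0-1.0) and a
# handover request to cabin lighting and an audible alarm. All thresholds
# and colors are invented for illustration.
def cabin_indication(confidence: float, handover_requested: bool):
    """Return (colors, brightness, audible_alarm) for the current state."""
    if handover_requested:
        # Autopilot wants to disengage: blink between two colors and sound
        # an alarm until the pilot/driver acknowledges.
        return [(255, 0, 0), (255, 255, 255)], 1.0, True
    if confidence > 0.9:
        return [(120, 0, 0)], 0.2, False      # dim red: everything copacetic
    if confidence > 0.6:
        return [(200, 150, 0)], 0.5, False    # amber, brighter: less confident
    return [(255, 220, 0)], 0.9, False        # bright yellow: pay attention


if __name__ == "__main__":
    for c in (0.95, 0.7, 0.4):
        print(c, cabin_indication(c, handover_requested=False))
    print("handover", cabin_indication(0.3, handover_requested=True))
```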
One interesting detail is that accelerating an A330 to M0.90 would probably not significantly damage the aircraft, if at all. For an unreliable airspeed issue, 85% thrust with a 5 degree nose-high attitude would keep the jet in the air. Keeping the nose much higher than 5 degrees risks a deep stall, which can easily result in a fatal crash.
There's a similar concept at play for medical devices. One part of IEC60601 specifies three alarm system priorities, and the colors get scarier, the visual indicators flash faster, and the tones get more urgent the higher priority the alarm is.
To be fair, captains gotta sleep sometimes. That's why copilots exist. It's more important for the captain to trust his copilots in a strange situation. I would argue that more errors have been made by captains / managers who start micromanaging when they panic, instead of trusting their lieutenants.
The captain woke up eventually, returned to the front to check out what was going on, and then trusted his pilots to explain it to him. Unfortunately, one of his pilots was incompetent and his actions were confusing the other (actually competent) pilot.
The issue is that the ignorant copilot didn't understand stalling, nor how to get out of a stall. It takes specific training to pull a plane out of a stall. It goes against human instinct (you have to push the nose down, THEN, after a certain velocity is reached, pull up).
The competent pilot was overruled by the incompetent one (the incompetent one was in the primary seat). The rest is history.
Pulling up immediately is the human panic response that needs to be "trained out". And it's what one of the two pilots did in the situation.
Pulling the stick all the way back is the normal stall recovery procedure for an Airbus 330. That's totally different from standard piloting. With the flight control system in normal law, the control system protections will keep the angle of attack below a stall, and will increase throttle to get airspeed if necessary.
AF447 had ice-clogged pitot tubes, and the pilots lacked reliable altitude and airspeed info. Redundant sensors reported conflicting data. The flight control system dropped back to "direct mode", where the control stick (which is a little joystick-like lever) directly moved the control surfaces. In that mode, pulling back the stick stalls the airplane.
Hitting the pilot with a mode change like that in the middle of an emergency was part of what caused the disaster.
Re pulling the stick all the way back: Yes, that sounds weird, but that's what the manual says. [1], "A330/A340 FLIGHT CREW TRAINING MANUAL OPERATIONAL PHILOSOPHY -- FLIGHT CONTROLS" p. 15.
"High AOA protection is an aerodynamic protection: The PF will notice if the normal flight envelope is exceeded for any reason, because the autopitch trim will stop, the aircraft will sink to maintain its current AOA (alpha PROT, strong static stability), and a significant change in aircraft behavior will occur.
If the PF then pulls the sidestick full aft, a maximum AOA (approximately corresponding to CL Max) is commanded. ...
If the angle-of-attack still increases and reaches ALPHA Floor threshold, the A/THR triggers TOGA thrust and engages (unless in some cases of one engine-out)."
What this says is that, in normal law, pulling the stick all the way back will give you the maximum pitch currently allowed, and if the pitch is higher than that, the engines go to full power automatically. That's a stall recovery.
But AF447 didn't have good sensor data for airspeed. That disabled this automatic recovery capability. (There are alarms and displays when this happens.) With the control system unable to assist the pilot, pulling the stick all the way back was totally wrong. Also, the AF447 pilots thought they were in an overspeed condition, while in fact they were stalled.
(I'm not a pilot; I used to work for an aerospace company.)
It (the A/THR triggers TOGA thrust and engages) is not a stall recovery.
From Flying Training: "Stalling will occur whenever the critical angle of attack is exceeded, irrespective of airspeed. The only way to recover is to decrease the angle of attack (i.e. relax the back pressure and/or move the control column forward)."
This is referring to the aerodynamic stall, which is what caused AF447 to crash, not an engine stall.
I'm not an Airbus driver, but it's my understanding that normal law provides stall protection, and pulling the stick full aft will give you the highest angle of attack consistent with not stalling the aircraft. This is what is being discussed in the paragraphs you quote. It is, however, possible to stall the Airbus in alternate law, in exactly the same way as it happens in any other aircraft. AF447 was flying in alternate law.
In summary: increasing thrust is not what breaks the (aerodynamic) stall (but it is used to minimize height loss). Reducing AOA by pitching forward breaks the stall, and that is done on an Airbus by pushing forward on the sidestick.
Was going to say exactly this. Pilots do practice recovery training (with normal stall recovery as well as in adverse conditions), and I'm sure all of the pilots on board were aware (or at least should have been aware) of the correct procedure.
Airbuses are weird beasts though, and I have to say that as a pilot I find the normal stall recovery procedure to be completely counter intuitive. The compounding problems with the pitot tubes being iced over plus seeing St. Elmo's Fire due to improperly routing the aircraft directly into a thunderstorm certainly didn't help matters.
Perhaps what's happening is that Airbus realized in the aftermath of AF447 that the idea of "pull up, the airplane will do what's right to save you" wasn't a good one and changed their training materials. I don't know for sure as I'm not terribly familiar with Airbus. I do think that the whole input averaging is an awful idea though.
> Hitting the pilot with a mode change like that in the middle of an emergency was part of what caused the disaster.
That's true, but mostly because the guy flying AF447 wasn't a "pilot", he didn't understand how to properly fly the aircraft. Like most Airbus pilots, he was a "system manager" and spent thousands of flight hours watching while the aircraft itself did most of the work. When the automation failed, he simply didn't know what to do to fly the plane by hand.
From what I've read (and I'm not a pilot either), the proper response to unreliable or conflicting airspeed data for the A330, when you're at high altitude (not trying to land or take off or anything like that) is simple. It's something like:
85% power, wings level, 5 degrees nose up pitch
It's not a stretch to ask a pilot to remember something simple like that. Even when stressed.
Of course, it is a stretch to ask airlines to either hire competent pilots, or at the least to periodically put pilots into stressful situations like that in the simulator. They won't do it because simulator time is too expensive. It's more cost effective for Air Chance to lose an occasional plane than to make sure they have competent and trained people in the cockpit.
Your tone is harsh, but you're correct in pointing out that lack of basic flying skills was a factor in AF447.
Unusual Attitude Recovery [0] in suitable aircraft (L-39 jet trainer is somewhat common) has been made available to airline pilots.
They do make airline simulator sessions somewhat stressful for airline pilots. Capt. Robert "Hoot" Gibson called it "death by simulation" when he trained to fly the Space Shuttle. They still gave him a workout in the simulator to fly a Southwest 737.
Yes, it does. The problem is that the pilots didn't know that they were in a stall because of conflicting readings from their instrumentation due to a failure in the pitot-static system. Pitot-static drives the airspeed indicator, altimeter and vertical speed indicator.
Doesn't get much more wrong than that, poor guys. Is there some kind of G/inertia meter that could have clued them up? To see an overspeed + climb but the freefall indicator is going crazy.
The VSI (vertical speed indicator) is the "freefall indicator" of which you speak. :)
The problem is that when there's the pitot-static failure the VSI (generally) indicates by "sticking" wherever it was when the failure occurred.
"Speed" means a different thing to a pilot than it does to a non-pilot. Groundspeed is irrelevant (aerodynamically), airspeed is everything. To sense "overspeed" you have to know airspeed. And since airspeed is a measure of the movement of the aircraft relative to the medium it's moving through (air), there has to be a system to measure that movement. That system is the pitot-static system. The pitot tube sticks into the uninterrupted airflow, usually on the wing somewhere, the static port sits where the air is calm, usually on the fuselage. The airspeed indicator in particular relies on the differential between the two inputs to determine airspeed.
Sure, captains have to sleep - the night before the flight, not one hour after partying and staying awake the whole next day. Also, he picked a time to nap when clearly his younger charges were not too confident about the approaching weather. He just needed to stay focused 15 minutes longer. I don't know if Capt. Marc Dubois was a drinker, but I am assuming he probably had a few, given how pervasive alcohol is in today's world, not to mention pilots romanticizing drinking and flying 'war stories'. I know a few pilots, and like other drinkers, they play down the effects of the night before's drinking on their ability to fly.
The original post is about how automated systems should not crash into an obstacle even if a driver summons them to do so. The AF447 article builds up a great story of error upon error, but I think the root cause was a captain asleep or too groggy at the wheel. Given the Airbus's safety record, I think it is OK to say the automation has most likely saved more lives than if we were 'pilot only' at this point.
This post is complete and utter nonsense in so many ways I don't know how to begin.
Relevant: I am a licensed pilot and am very familiar with the NTSB and similar agencies that do post crash analysis. They are among the most thorough and most scientific and objective inquiries that exist in the modern world.
You're making really serious allegations against actual people that are wholly unsupported.
Please point out where I deviated from the facts in the article from Vanity Fair cited in this sub-thread, to which I was replying above to @dragontamer. You are making allegations that my post is 'complete and utter nonsense', or that my points are 'wholly unsupported'.
I am not sure what the NTSB has to do with the remarks I made. If anything, the head investigator relayed supporting allegations to the Vanity Fair author. Here is a quote from the article [1]:
>>The chief French investigator, Alain Bouillard, later said to me, “If the captain had stayed in position through the Intertropical Convergence Zone, it would have delayed his sleep by no more than 15 minutes, and because of his experience, maybe the story would have ended differently.
>>But, it became known, he had gotten only one hour of sleep the previous night. Rather than resting, he had spent the day touring Rio with his companion.
I accept that you are a licensed pilot. I am a technical diver, with many hours underwater fixing hydraulic and electrical systems, dealing with emergencies like hydraulic fluid leaks, and making emergency repairs. I am used to tight situations; however, I am not an underwater accident or incident investigator, and my diving credentials only give me some insight into those types of incidents, that's it.
I have also checked other news accounts, and the off-duty flight attendant was allegedly his mistress, based on his own emails uncovered by investigators.
More from the Vanity Fair article:
>Marc Dubois, 58, was traveling with an off-duty flight attendant and opera singer. In the French manner, the accident report made no mention of Dubois’s private life, but that omission then required a finding that fatigue played no role, when the captain’s inattention clearly did.
I am not saying the pitot tubes freezing had nothing to do with it, as obviously that was what kicked off the problem. I am addressing the article, which engagingly enumerates how an equipment malfunction escalated into human error, and the behaviors and responses that followed in an airline crash.
I was sincerely interested to see your reply to mine, since I am still not sure why you came down so strong on it, and to perhaps gain your insight on it.
> I think autopilot confidence-mood light should be placed on such vehicles. This way you could predict what is going to happen with the vehicle.
I think this is an excellent idea. Soft blue lighting (or whatever) while it's driving automatically, soft green for manual, angry pulsing red for "human, I need assistance now".
Oh, I didn't mean for the lighting to be either/or (although I can see how my post read like that). I'm thinking more like the aura lighting on Culture drones - so for my example, auto mode might be blue, but tinged with more and more pink/red as it gets more worried (ie. lower confidence), a pulse of white if someone cuts it off, that kind of thing. Angry flashing red would be the equivalent of a human driver having a panic attack and taking their hands off the wheel.
It's not an autonomous vehicle. This mode does operate without a human in the driver's seat, but the human is still expected to observe and intervene if things go wrong. To accommodate the special nature of this, speeds are limited to 1MPH for this particular feature, and the car will move a maximum of 39ft before ending the maneuver.
The problem here is that it's too easy to activate the feature by accident and it's not sufficiently clear when you do so. IMO it needs another confirmation step on the touchscreen after double-clicking the Park button before it goes into the "auto-park after closing the door" mode. It's not a sensor problem; the sensors aren't intended to be foolproof here, they're just a backup.
By default, it does. You have to explicitly change a setting (and I believe get a dire warning about what you're about to do) in order to enable it to move hands-off.
Does this include the "double-park and get out" feature this guy went through? I know it works that way with the key fob, but I'd never heard about this way of activating Summon before, and I'm not sure how a dead man's switch would work in this case.
According to the release notes introducing the feature there's also no way of activating Summon from the key fob or anything other than the mobile app in that firmware version unless you disable dead man's switch operation, which enables the double-press Park feature. Hopefully they've changed it since but it's like Tesla designed it with only two modes - safe but impractical (get out your mobile, unlock it, launch Tesla app, hold down button) and convenient but unsafe (all activation methods enabled, no dead man's switch). See http://electrek.co/2016/02/17/tesla-new-update-autopark-summ... and https://youtu.be/Cg7V0gnW1Us
I don't think using the phone app is particularly impractical. It's usually as convenient to access as the key fob is, and unlocking it and launching the app isn't that hard.
I will admit that the first thing I did when I got the update that defaulted to dead-man's-switch operation was to put it back the way it was. But I was careful to understand the implications of what I was doing, at least.
> I don't think using the phone app is particularly impractical.
In Australia, we don't have the option of disabling the dead man's switch, so we're forced to use the phone app to control Summon.
There are no words in the English language that can begin to describe how frustratingly unreliable it is. Most of the time it simply doesn't work (will say something like "failed to communicate with car"), and even when it does work it tends to lose connection half way and abort. I don't think I've ever managed to complete a Summon without it aborting. This is exacerbated by the fact that the places where I actually need to use Summon tend to be places with poor cell phone reception, e.g. underground car parks.
I can't wait for the key fob-controlled Summon to be enabled in Australia. Summon in its current state is basically useless.
Good points, I wasn't thinking of the communications problems.
It's too bad the phone app can't communicate directly with the car using local radio, like Bluetooth or peer-to-peer WiFi. That would solve this problem and others besides.
Agreed. Unfortunately it seems Tesla requires all control commands for the car to come from their servers through their VPN for security reasons, so the phone can't control the car directly.
It has ultrasonic sensors in the bumpers. They're too far down to detect a trailer at windshield level, but they'll detect the legs of a person standing in front of it.
>the human is still expected to observe and intervene if things go wrong. To accommodate the special nature of this, speeds are limited to 1MPH for this particular feature, and the car will move a maximum of 39ft before ending the maneuver
That's what it does? And it can crash into things? How was this hailed as ground-breaking technology? When it was announced it was on the front page of every technology site.
Summon isn't autopilot. It is a pretty lame feature IMHO that I don't think I will ever use as I don't have a garage too small for my car and I avoid parking in similarly tiny parking spots. I know a lot of Tesla S/X owners and none have ever used it.
This is a basic UX failure. If you press the "Park" button twice instead of once, the Autopark dialog appears asking you to select forward or backward parking. But what's not clearly communicated is that if you don't select either option, forward is automatically selected and Autopark turns on. This is in contrast with all previous autopark features, which required manual confirmation on the touch screen.
It's too easy for a driver to be momentarily distracted. Tesla should require the driver to opt in to self parking on the touch screen, rather than the current behavior which requires them to opt out. This seems only prudent for a feature that makes the vehicle suddenly move on its own.
Here's a video of the recently updated UI, showing how easy it is to [accidentally] activate Autopark by pressing the park button a second time. Note that the driver never has to touch the main screen to confirm.
So when Tesla says the driver activated it by "a double-press of the gear selector stalk button, shifting from Drive to Park and requesting Summon activation," they don't mean that a series of three actions are required to activate Autopark (as some articles seem to think[1]). Tesla means that all of those actions are activated just by double-tapping the park button with no confirmation. That's some dangerous UX design if I've ever heard of it.
Holy cow. That video demonstrates exactly what must have happened. You just double tap the park button, exit the vehicle, and without any further interaction with the key fob the car starts moving forward!
That's insane.
What's worse is Tesla must know this is the likely scenario. Shame on them for blaming the user in an attempt to cover their ass.
The sensors really aren't ready for it. For example, when I tested it parallel parked to a curb, the Model S decided to turn the wheels on its own and ended up scraping the rear wheel because it ran into the curb instead of going straight as it was originally aimed.
The current sensors and software are absolutely not ready for true self driving, that much is clear to me after driving the S for 6 months.
10+ years ago, for the Grand Challenge, this cost some money (and good luck getting decent-resolution stereo at a decent FPS from a pair of 1-megapixel sensors, so most teams relied on lidar - $3K and you have a minimally decent 3D view of the scene ahead). Today tens-of-megapixel sensors cost next to nothing, along with the CPU power to process them. One can have reasonable infrared too. Ultrasound sensors - cost nothing. Short-distance lidar costs close to nothing too. Millimeter-wave radar still probably costs a bit, just because there's no mass production. When I see Google's cars - the Lexus SUVs - they have at least a minimally reasonable set of sensors. Nobody else comes even close. I don't understand why.
I'm not sure if the biggest challenge is sensors or software. I don't know what Tesla is running, but I have a strong feeling that software in the large is not up to the task of autonomous driving. Most software (including automotive) has only very limited realtime behavior due to memory allocations, OS preemption, interrupts, etc.; error-prone programming languages are used, and resource (memory) usage is often unbounded. I can't imagine that Tesla or anybody else producing self-driving features at the moment is using something like Ada Ravenscar or advanced static validation techniques across all the components involved in the self-driving features - components which are often quite complicated (image recognition, etc.) and therefore hard to run in such an environment.
Totally agree. I'm really sad to see Tesla jump the gun on this one and claim they have "autopilot". It seems similar to the debate around landing New Shepard / Falcon 9, except here the false claims and half-baked implementation could set self-driving cars back by another 5 years.
Are you worried about them setting self-driving back by 5 years versus a base case where Tesla didn't exist, or setting it back 5 years versus the 20 or so years of advancing and popularizing the possibilities that they've done?
Someone in the comments of the article posted an image of the trailer; it was pretty tall, and the Tesla probably doesn't have sensors at that height:
http://img.ksl.com/slc/2590/259060/25906051.JPG
The sensors are inadequate for this particular situation. The problem for the car was that the obstacle was about five feet off the ground. The parking sensors aren't adequate to deal with something floating in the air like that, so the car was oblivious to the fact that anything was there. There is a camera that could have seen it, but Summon apparently doesn't use the cameras, only the sensors.
But with regards to how "humans need seconds, not milliseconds, to react to complex unexpected events": That actually doesn't seem to be a problem in this situation. This human evidently had seconds to deal with it. What appears to have happened here is that the guy somehow accidentally activated the feature, ignored the alert that came up, got out of the car and stood there as the car very slowly edged toward the trailer.
So Summon mode, which Tesla itself advertises as, amongst other things, a way by which your car can put itself in the garage (https://www.teslamotors.com/blog/summon-your-tesla-your-phon...), can't see objects that are not 'on the ground' but in the air, like, say, a Garage Door...
Especially since the PR reads "It will open the garage door and come to greet you." That implies sufficient sensing to detect garage door clearance. Going into a garage automatically, without sensing overhead or projecting objects, is going to cause problems. Especially since the user, who's probably behind the car at that point, is in the wrong place to see obstacles.
(Oh, but it's "beta". No, Tesla, that doesn't excuse you.)
It is the responsibility of the driver to check to make sure the car has a clear path. In other words, you get out of the car, check to make sure everything is clear, then engage the feature.
The sensors are only there to tell the car when to stop (as in, to sense the back of your garage).
What if the guy activated regular vanilla cruise control and rammed into the truck. Would everybody be going off on Ford/GM/etc.? I doubt it.
That's a little different. Tesla advertises the feature as you being able to walk away while the car parks itself (and in some promos, even opens and closes garage doors by itself).
That's more than a little bit different from "careless on cruise control" (which, point of fact, most (new) cruise control systems will prevent due to those same sensors).
Not when you realize that one of the use cases from Tesla themselves is that the car can open the door and close it by itself. How does it know when it's safe or that there is clearance, moving or not?
You are correct. Summon can be improved by using the camera as well as other sensors. I expect we'll see that in the future.
The other problem is a UI issue, from my perspective. Summon will automatically move the car forward after two presses of the stalk (one press is park). Since that's so easy to confuse with a park command, I think it would be imperative to have the user select "forward" or "backward" for Summon to start, instead of assuming "forward". Or otherwise perhaps give better indication that the Summon feature has been initiated?
This is what's called Artificial Stupidity. AI will never be achieved because fundamentally, a computer only does what it's told. There will be plenty of AS in the near future, though, due to misapplication of technology and wishful thinking.
That said, sure, it should be able to stop on its own, but I think they couldn't have been more clear that this is beta and the driver is still always responsible.
In my view the driver is just as liable if they put on cruise control and don't pay attention. Is it the manufacturer's fault the car slammed into a vehicle in front of them while cruise control was on? No, I think any reasonable person will say it's the driver's fault.
It's funny, I honestly had an entirely opposite reaction to that letter. It seems like a reasonable thing for the software developers to look at to confirm the systems were working as specified, but leaves me with a ton of questions about how this feature was designed overall.
It sounds like the only two mistakes that the guy made were to make a double press of the park button (which, as far as mistakes go, isn't the most unreasonable thing to do), and to assume that a car told to be in park would, well, be in park. He ignored warnings, yes, but he was likely worried about getting out of the car and doing other things by that point, which isn't wildly unreasonable either.
Summon mode is not turned on by default. That's the damning thing here. He turned on the ability to use Summon mode by hand, purposefully. At that point he should know the responsibility that comes with that. That's directly akin to manually disabling traction control on a normal car. At that point, you can't blame the manufacturer for losing control. You held the button down for three seconds and it dinged and the dashboard light came on and you KNEW that it would disable traction control.
He enabled Summon mode through the menu, then either accidentally or on purpose triggered Summon mode, then stepped out of the car and the car drove off. He shouldn't have assumed the car was in park when he knew that hitting the park button twice would activate this beta software that he had to manually enable in the first place.
He used Summon mode on purpose and didn't pay attention to its limitations or the rules saying "only use this on private property". The only blame Tesla has is selling a car to an irresponsible driver.
I'm pretty sure he had to turn on two separate settings, both with their own warnings. There's one setting to enable Summon, and then there's another setting to disable the requirement for "continuous press," since by default Summon only operates in a dead-man's-switch mode with the driver's finger on a button in the phone app.
I do think that the double-click Park feature could use an extra confirmation step. It already pops up a window asking whether you want to go forward or backward, and all they need to do is make it so you have to actively select one, not default it to forward as they currently do.
Being in the stage machinery business, I have never relied on something like a wireless smartphone for a dead man's switch. You need certified hardware/software meeting the relevant standards.
This is why I still use industrial PLCs for my installations over my PIC chip or Arduino creations. I will use the latter in non-safety-related, temporary installations, but unless I have had the HW/SW third-party inspected, I'll stay with the certified combination. An industrial e-stop relay is far more than just a certified version of the 5V relay you'd typically use with an Arduino or PIC project to control motors or other actuators.
There are protocols, and there can be rules, such as: if the WiFi signal is lost, e-stop. But I have personally tested a system where the hydraulic lift continued to run when WiFi was lost, even with a rule to prevent it. Good thing I was standing purposefully near a hard-wired e-stop.
> Good thing I was standing purposefully near a hard-wired e-stop.
Good work! I always have a hand on the e-stop when testing something that could kill someone. And I mean a hard wired e-stop system using a properly rated safety relay, too... never trust software (even on an industrial PLC).
I was testing without people involved, so only machinery would have been damaged. I am very skeptical of any wireless safety systems. I know they exist and are used.
It's the same reason I usually put in some kind of mechanical stop in case of an errant bit-flip in a running program or piece of hardware. I put in a steel flag that, when struck, turned 90 degrees, locking the other piece of machinery so it could not move until the first device returned. This was only a redundancy to the software, and I slept better at night for it. The equipment ran for almost 10 years, 24 times a day, 355 days a year, without incident.
If it's programmed defensively, it could be reasonably safe. For example, I would want the car to be performing an ongoing, end-to-end verification of the finger's continued presence on the phone.
This could mean, for example, the app could heartbeat the finger's presence multiple times per second. The car would be continuously checking, such that if 500 or 1000 msec had passed since the last end-to-end verification, the car stops.
You could even reduce the risk of API/digitizer errors and require the user to continuously tap/stroke/rub/swirl the deadman switch button, or perform some device movement captured by the accelerometer.
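As a rough sketch of what I mean (the names and the 700 ms timeout are illustrative assumptions, not anyone's actual protocol): the app heartbeats "finger still down" several times a second, and the car halts if too long passes since the last verified heartbeat.

```python
# Minimal heartbeat-watchdog sketch for a phone-based dead man's switch.
# Everything here (class, names, timeout) is made up for illustration.
import time

HEARTBEAT_TIMEOUT_S = 0.7  # stop if no verified heartbeat within ~700 ms


class SummonWatchdog:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.moving = False

    def on_heartbeat(self, finger_down: bool) -> None:
        """Called for each end-to-end verified message from the phone app."""
        if finger_down:
            self.last_heartbeat = time.monotonic()

    def tick(self) -> bool:
        """Called from the vehicle control loop; returns True if motion is allowed."""
        stale = time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S
        self.moving = not stale
        if stale:
            self.stop_vehicle()
        return self.moving

    def stop_vehicle(self) -> None:
        print("dead man's switch released or link lost -> stopping")


if __name__ == "__main__":
    wd = SummonWatchdog()
    wd.on_heartbeat(finger_down=True)
    print("motion allowed:", wd.tick())   # True: fresh heartbeat
    time.sleep(0.8)
    print("motion allowed:", wd.tick())   # False: heartbeat went stale
```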
In the safety engineering/mechanical engineering business, this is why you perform an FMEA: a Failure Mode and Effects Analysis. You list everything that could possibly go wrong and how it could go wrong. You assign a rating for the likelihood of the failure being detected, the severity if it does fail, and the frequency of occurrence (or likely occurrence). You then address each failure mode in order, based on the product of those three factors (detectability, severity, occurrence), with a mitigation strategy, and only if you cannot entirely remove or design away the risk. (A toy example of that ranking calculation is sketched at the end of this comment.)
You cannot (well, you can) install automation controls on any old laptop or notebook. Good luck trying to show your smartphone to the insurance investigator! Our maintenance tablet had to be ruggedized to mil-spec, and had to have a rated e-stop button on it per BSI standards. We never used wireless control without being near a 'hard' e-stop, and always just for maintenance mode, never real runs with people around.
If you can get frustrated with your touchscreen when your finger's sweaty, imagine a 15 metric ton lift continuing to move at 100 mm/s because the touch screen on your smartphone still thinks your finger is holding the 'dead man' icon! Not to mention WiFi dropping, or your battery going dead.
I can only say this, since I have seen some hairy situations in my day.
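Here's the toy FMEA ranking example mentioned above; the failure modes and ratings are invented purely for illustration of the risk-priority-number calculation:

```python
# Toy FMEA sketch: each failure mode gets severity, occurrence and
# detectability ratings (here 1-10), and the risk priority number (RPN)
# is their product. All entries below are made up for illustration.
failure_modes = [
    {"mode": "WiFi link drops during lift move",    "sev": 9,  "occ": 5, "det": 6},
    {"mode": "Touchscreen registers phantom press", "sev": 8,  "occ": 3, "det": 7},
    {"mode": "E-stop relay contact welds shut",     "sev": 10, "occ": 2, "det": 4},
]

for fm in failure_modes:
    fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]

# Address the highest RPN first; design the risk out where possible,
# mitigate only where you can't.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f"RPN {fm['rpn']:4d}  {fm['mode']}")
```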
Interesting reading the follow up. This really really makes you wonder about the motives of the (non)driver. I agree with the various safety folks that the existing anti-collision features should take precedence over the summon feature so if nothing else I hope someone is back there re-ordering their subsumption behaviors to effect that.
Tesla has a big target painted on its back moving as fast as it is, and people will take advantage of that. This smells of that sort of thing but one can never know without being there. It looks pretty clear this person isn't going to get any sympathy from Tesla.
>> I agree with the various safety folks that the existing anti-collision features should take precedence over the summon feature
Well, actually this is like executing something as a sudo user. The question of safety precedence doesn't arise because you have explicitly asked for it to be disabled. Complaining that safety features should still have taken precedence is naive; it's really trying to dump the blame on somebody else for what is very clearly your own mistake.
I'm reminded of the IT security talk (Ugly Bags of Water), where the first dozen or so slides were about how the automotive industry had to make a lot of changes to account for humans' error-proneness.
I guess now that software is sneaking back into automobiles, we're going to shift back into "blame the user" mode?
"But he used the feature wrong!" - To which I say, why was he able to use the feature the wrong way in the first place? Why are there so many limitations (won't sense a bike; won't sense a partially opened garage door) on a feature designed and advertised as "hands off summoning of the vehicle"?
I think the liability shifts a little bit because when you're in cruise control you're literally behind the steering wheel. In this case you're outside the vehicle.
I think as more autonomous features get developed it's going to be complicated for some time regarding who is culpable for a given accident.
Even outside the vehicle you're still in control. By default, Summon can only be used with the mobile app and with a 'dead man switch': lift your finger off the button in the app and the car stops. The driver had to specifically disable that protection to use the feature the way he did, and now claims no responsibility. Also, pressing any button on the key stops the car. Seems like a whole host of bad decisions.
> I can't press a button to disable the brakes on my car, for instance.
The brake is not a safety feature that you expect the car to activate automatically on your behalf. The driver has to explicitly step on the brake pedal to activate it.
With regards to Summon, disabling its safety limitations is like disabling traction control, stability control, automatic emergency braking, lane keeping/lane departure warning, etc., all of which can be disabled by pressing a button somewhere.
>>It shouldn't be possible to disable safety features.
Here it really needs to be defined what qualifies as safety. The act of driving (in certain circumstances) is in itself unsafe by many a definition; by relinquishing control to the driver you open up all the risks that are likely to occur. It's hard to draw the boundary as to what is safe and what isn't. Please tell me: should cars carry mandatory breath analyzers and disable the ignition if the analyzer finds traces of alcohol in the breath? Should there be a detector to check that the driver at the wheel was well rested the night before? A detector to check that the driver isn't in a rage? These scenarios only multiply, but remember that when you relinquish control to the driver you open the door to all risks equally. Therefore you can't handle all the thousands of situations; instead what you do is remind the user of the responsibility, double-check the user's decision and then relinquish control to the user.
Therefore in all seriousness, if you take control of the car regardless of the situation you are really responsible. Even if you control the car through your phone.
There's a qualitative difference between allowing disabling of a safety feature (what I'm talking about) and adding additional ones (every example you just mentioned). In that light, do you have any response to my statement that it shouldn't be possible to disable safety features? Note that requiring the addition of other safety features is a separate issue.
My personal opinion is that it's your property and you're responsible for it no matter what. To me, it's not really any different than a person's dog biting another person or a tree falling on his neighbor's house. It's likely the owner did not intend for these events to happen, but they did, and the owner should be held liable for them.
> My personal opinion is that it's your property and you're responsible for it no matter what.
I'm sure you don't mean that as absolutely as you're making it sound. For example, if I steal your car and run over somebody with it, surely that's my responsibility, not yours. Right?
It sounds like Tesla are technically on the right side of their own usage instructions... but those instructions /stink/.
It's like the user manual for a microwave oven saying it "must never be used unattended" and "the user is responsible for shutting off the oven if it fails to stop when the timer reaches zero".
(Whoops, I meant to post this last night but apparently forgot.)
Stoves don't advertise any degree of autonomy. You turn the tap, fire comes out. You put things in the fire, they get hot and/or they burn. It doesn't deliver any safeguards but it doesn't promise any.
The other end of the scale (if we're talking about gas-burning apparatus) would be your instant gas hot water system. It's got a series of belt-and-suspenders safety devices in it to stop it from overheating, exploding, or gassing you out if the pilot light goes out. It absolutely does advertise that it's an automated system which will do its thing without your intervention - and if it fails to do its thing, the results can be disastrous. So it's designed to be safe.
The problem lies when you have devices in the middle of the scale, with devices that are designed to have some degree of autonomy but lack the commitment to see it through. Take my drip coffee maker, for instance - the manual firmly admonishes you to "never run the machine unattended" on one page and then on the next, explains how the easy-to-use timer system allows you to have fresh coffee waiting for you when you wake up. The intent is clearly that the system be used without supervision, but instead of engineering in the appropriate protections (and tripling the price) the manufacturer just put some ass-covering lines in the manual in the hope that if there's a malfunction and a house burns down, that protects them from a lawsuit.
Eeeh I dunno, that's more like someone trying to take their P90D up a 4WD track because it has four driven wheels. Now, if it had a "low range" setting on its gear selector, then that would be setting up an expectation that it'd handle it.
If Tesla's response to this is actually what the article says, then that's somewhat worrying. It's never a good idea to blame the user for a failing of the product like this, especially on something like a car, beta version or not. If the car can't reliably not collide with obstacles in Summon mode, then the mode shouldn't be available to the public yet.
This also points out a failing with Tesla's "we don't need LIDAR" strategy for sensors. Ultrasonic/IR sensors around the body might be reasonable for most driving situations, but clearly there are going to be incidents like this one if the car can't see at the full height of the body at close distance.
Honestly it's a design issue. The way that Summon is activated (a double tap on P) is very easy to do by accident. I've occasionally seen the screen where you cancel take 1-3s to pop up, depending on what the rest of the SoC is doing.
I could totally see a scenario where this happened: the screen popped up while he was exiting, and he wasn't able to hear/notice that Summon was engaged.
The better fix here is to have a CONFIRM on the touchscreen rather than a CANCEL. It wouldn't hinder the experience since you already select forwards/back and catches this accident case.
For the record, love the car and almost everything that Tesla does but I really hope they revisit this and design it a bit more defensively.
"Unfortunately, these warnings were not heeded in this incident. The vehicle logs confirm that the automatic Summon feature was initiated by a double-press of the gear selector stalk button, shifting from Drive to Park and requesting Summon activation. The driver was alerted of the Summon activation with an audible chime and a pop-up message on the center touchscreen display. At this time, the driver had the opportunity to cancel the action by pressing CANCEL on the center touchscreen display; however, the CANCEL button was not clicked by the driver. In the next second, the brake pedal was released and two seconds later, the driver exited the vehicle. Three seconds after that, the driver's door was closed, and another three seconds later, Summon activated pursuant to the driver's double-press activation request. Approximately five minutes, sixteen seconds after Summon activated, the vehicle's driver's-side front door was opened again. The vehicle's behavior was the result of the driver's own actions and as you were informed through multiple sources regarding the Summon feature, the driver is always responsible for the safe operation and for maintaining proper control of the vehicle."
Basically, they designed an autonomous-operation mode that was easy to activate by accident and incapable of reliably avoiding crashing into things, it appears someone did and his shiny Tesla crashed into a trailer as a result, and they responded by accusing him of intentionally activating the feature and misusing it.
Why is this being down voted? The highly detailed log of the driver's every action is crazy creepy.
I get that the data is likely useful for debugging, and may very well be a function of the feature's beta status (can someone confirm? Or is this something that Teslas do all the time?), but it's still insanely creepy that every single action this guy took in his own car was remotely logged and accessible. This guy is basically driving a Telescreen from 1984 to work.
Looks like someone is carpet-downvoting everything in the subthread (my root post dropped ~3 points just as these were downvoted).
Yeah, it's a double-edged sword. On one hand it's a ton of data, on the other there's multiple cases where you don't need to bring the car into the dealer for them to diagnose something.
Oh wow. I have been anti-Tesla due to their creepy "we still own your car" auto-update craziness but that just takes it up another level. No, Tesla, I will not buy your cars, not now not ever, because you don't trust me and therefore I do not trust you.
But that seems to mostly be speed and throttle information stored in a black box in the car that logs in the event of an accident and isn't remotely accessible. That sort of thing is a far cry from "our server logs show you opened the driver side door at 5:53 PM" like Tesla is doing. If other manufacturers are recording that sort of granular data, too, then yikes.
I don't think the car's logs are automatically transmitted to Tesla. They reside on the car, and Tesla can log in remotely to view them if they have a valid reason to.
Who decides if it's a valid reason, and who authorizes such access?
If it's not the owner... then they aren't really the owner.
With the number of cameras/sensors on a Tesla, it's a privacy nightmare... I won't buy one until the answers to these questions are the ones that I want them to be.
What I also found irritating in this article was the sentence "you remain prepared to stop the vehicle at any time using your key fob or mobile app or by pressing any door handle". Stopping the car by mobile app?!? Imho stopping the car in such a situation is safety-critical, which absolutely requires hard realtime behavior. I don't see any chance of achieving this through consumer hardware/software or a Bluetooth/WiFi connection.
Unfortunately, these warnings were not heeded in this incident.
The way cars chime at people, so often, for such bs reasons, it's a wonder anyone would design a UX where it's safety-critical for someone to pay attention to a chime and pop-up. That UX designer needs to have a good talking to, or be fired.
Heh. In contrast to most GPS systems, my Audi doesn't outright block the ability to enter destinations whilst en route, but instead says "don't do this while driving", and proceeds to let you do it if you so choose.
I agree with you in this case. A Tesla is a mass-market item. It should be held to very high safety standards.
Just ask the designers of that Chernobyl plant.
I somewhat disagree here. It was 1960's technology. Even now, complex systems like that inevitably have a plethora of ways that humans can screw them up. It's very hard to completely prevent determined idiots from destroying the equipment.
1-3 s? That could lead to really serious problems for such features. Normally such a prompt must be guaranteed to be displayed in less than 200 ms or thereabouts.
I'm currently wondering whether this is a safety-relevant feature (according to ASIL/ISO 26262) and whether it would even be allowed to run such a feature on a component that is not designed for safety-related environments requiring realtime behavior (a Qt UI running on Linux certainly doesn't provide that, and even lots of other automotive software stacks, including AUTOSAR, give only limited guarantees).
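To make the timing concern concrete, here's a minimal sketch (hypothetical API, not Tesla's code; the slow stub UI only exists to make it runnable) of a guard that aborts activation when the safety prompt misses its display deadline, instead of proceeding by default:

    import time

    PROMPT_DEADLINE_S = 0.2    # hypothetical 200 ms budget for the safety prompt

    class FakeUI:
        """Stand-in UI that is slow to render, simulating a busy SoC."""
        def show_summon_prompt(self):
            time.sleep(0.5)                 # simulate UI latency
            return time.monotonic()         # timestamp when the prompt appeared

    def activate_with_deadline(ui):
        # Hypothetical guard: if the cancel prompt can't be shown within its
        # deadline, abort the activation rather than letting it proceed.
        requested_at = time.monotonic()
        shown_at = ui.show_summon_prompt()
        if shown_at - requested_at > PROMPT_DEADLINE_S:
            return "aborted: prompt missed its display deadline"
        return "activation may proceed"

    print(activate_with_deadline(FakeUI()))  # -> aborted on this slow UI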
Yeah, I did a bit more testing and I could actually get ~5 s if I kicked off a navigation right before Summon (I've also seen similar slowdowns on cancelled nav).
The hazards do flash at the same time but that also happens when you lock the car.
You do need to opt in, but I'm hoping they make a change to have it a bit more defensive. In my opinion the right thing to do here is to admit it is possible to kick off accidentally and remedy it (much like with the battery shield).
>The hazards do flash at the same time but that also happens when you lock the car.
When activated from the stalk, the hazard lights don't flash continuously like they do with the key fob or mobile app. They only flash once, because the car immediately and "automagically" selects the forward direction, transitioning away from the flashing state. And the only auditory indication while the driver is still in the car is a single chime.
>You do need to opt-in but I'm hoping they make a change to have it a bit more defensive.
The Autopark dialog is opt-out, not opt-in, unlike other Tesla automatic features, which require manual confirmation on the touch screen to activate.
A single additional press of the park button brings up the Summon dialog, with arrows to move the car forward and backward. The flaw is that forward is the default. You don't have to press it. The default should be "do nothing," making the driver confirm their intent to Autopark.
They have a good point that if the trailer is at windshield level then the ultrasonic/radar system can't detect it, but any vision system should be able to (like the one I imagine is used to find lane markings).
I agree completely with the Verge: this should never happen. 'Beta' is not an excuse for this kind of thing, ever.
A valid excuse? The mode was activated and a small landslide caused the car to slip down the side of a hill. THAT'S a valid excuse. 'We told you to be careful' isn't.
Yeah, we live in a web and app world where most software guys are used to having a great deal of latitude in these types of things. It's much different when you're dealing with multi-ton death machines like cars and planes.
I understand Tesla is clarifying that the driver misused the feature and that this is not normal operation, which is fine as far as it goes, but they simply shouldn't allow this to happen. If your product is not resilient against human error (or even a reasonable degree of human malice), it's not production-ready.
Tesla actually takes a great attitude on this with regard to vehicle crash safety. They take minimizing fatalities super seriously. The kinds of collisions that all other automakers would've written off as "Well man, we can't stop people from driving into poles at 80 mph", Tesla notes and does everything they can to make sure the occupants can walk away.
That's the kind of attitude we need here -- if your machine allows something bad to happen, you should not blame the user, but take every reasonable measure to correct the problem. This is the attitude that allowed the Model S to break the crash safety scale. You can't be perfect, but you can be pretty good. Saying "Well, don't press that button if there's a trailer in front of your car" isn't good enough.
"Tesla actually takes a great attitude on this with regard to vehicle crash safety. They take minimizing fatalities super seriously. The kinds of collisions that all other automakers would've written off as "Well man, we can't stop people from driving into poles at 80 mph", Tesla notes and does everything they can to make sure the occupants can walk away."
I'm actually not sure if the camera in the rearview mirror on the Model S is stereoscopic or not - maybe someone on HN can confirm. I know it's used for reading speed limits and helping with lane keeping, but it can be surprisingly difficult to get accurate distance information to objects from a single lens if that's all it has.
I feel that pressing "park" should be idempotent. If I press "park" twice in my car, I don't want it to drive away once I get out. Tesla really needs a dedicated "start autopilot" button to make the intention to use the feature explicit.
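For what it's worth, idempotence here just means the second press is a no-op. A toy model (my own sketch, not how Tesla's gear selector actually works):

    from enum import Enum, auto

    class Gear(Enum):
        DRIVE = auto()
        PARK = auto()

    class Selector:
        """Toy model of an idempotent Park button: pressing Park any number
        of times leaves the car in Park; Summon would need its own control."""
        def __init__(self):
            self.gear = Gear.DRIVE

        def press_park(self):
            self.gear = Gear.PARK    # no hidden side effect on a repeat press
            return self.gear

    s = Selector()
    assert s.press_park() == Gear.PARK
    assert s.press_park() == Gear.PARK   # the second press changes nothing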
Yes, apparently the fact that this feature was activated was messaged on the instrument cluster, but that shouldn't be sufficient to absolve Tesla from the liability of this poor UI decision.
Especially when considering, as mentioned upstream, that Tesla's UI can have significant latency issues. "Several seconds" to display a confirmation (or actually a "Cancel") can easily mean the difference between catching your breath and several thousand dollars of damage, or worse.
It does sound like there could be an improvement to the interface here, but to be fair, it is a parking mode and it ran into a problem because of the specific environment it was started from. In a normal parking situation it would have understood its environment and not had an accident. Many special cases are being learned from every day because some autonomous features are in use by the general public.
> Yes, apparently the fact that this feature was activated was messaged on the instrument cluster, but that shouldn't be sufficient to absolve Tesla from the liability of this poor UI decision.
We're talking about a company which has installed a flat glass control panel in their cars — they clearly don't care about UI/UX.
Is it just me, or if you can't detect obstacles with sensors across the car's complete bounding box -- except for perhaps the top and bottom -- can this feature really even work reliably?
From a technical perspective, you just don't have all the data necessary, and therefore any solutions will be guesses, hacks and "best efforts", and cannot be improved via any manner of software update. This voids the "beta" claim made by the company, as no software update could remedy the situation.
Tesla has got to know this, and I think it's negligent for them to release a feature (even in "beta") when they know there are hard technical limitations (sensors, not software) that prohibit it from working properly. It puts property and people's lives at risk unnecessarily.
At the minimum, Tesla cars equipped or enabled with these features represent a higher risk to the public, and the owners of these vehicles should be required to carry high risk insurance.
> In a statement to KSL, Tesla says that Summon "may not detect certain obstacles, including those that are very narrow (e.g., bikes), lower than the fascia
May as well rewrite that to read "May run over bikers or children". If you can't implement a feature properly, then don't implement it at all. If that means current Teslas can't do it because they lack the proper sensors, then they shouldn't do it.
There is a grave danger that Tesla's precocious push of autonomous features could result in a PR disaster for self-driving technology if it actually ends up killing someone.
We shouldn't have this in the wild until we're sure it's ready.
I read a really interesting book lately called "Empires of Light" about the early days of electricity. Basically, people got electrocuted all the freaking time before we really figured out how to wire things safely. At one point there was a huge tangle of telegraph and power wires haphazardly strewn together all over New York City. People would abandon old wires in place and just run new ones on top of them.
So, there's going to be some deaths. Without a doubt, before autonomous cars are fully integrated into society, there will be some deaths that would not have happened with a human driver. That's always the cost of human progress.
Of course we should do everything we can to minimize it as much as possible, but there's no way to guarantee a new technology will be 100% perfect on the first try, or the second try, or the 50th try. What scares me is that one of these deaths will happen and the public outcry will kill the whole endeavor before it ever gets off the ground. We shouldn't let that happen.
Aside from the obvious "What about (security) flaws in software of (semi-autonomous) cars", I'm especially thinking about scenarios where some sort of sensor jammer is used to blind/misguide the vehicle (laser pointers blinding pilots are already a thing, so clearly there's people willing to try it out). I have the feeling we'll hear about that in the future.
Someone is going to have to die sooner or later if this technology gets into production. In 100 years I bet people will still die due to software bugs -- but hopefully very few. The important thing is if the feature has a net reduction in total deaths, and I believe that can be said of the Autopilot features that ship today.
I think if autonomous technology ends up killing somebody, PR is the last angle we should worry about. Let's first worry about pushing a technology that, y'know, kills people.
That line of thinking is erroneous in my opinion. Autonomous technology only needs to kill slightly fewer people than the existing manual technology to be worth debating, and it's a no-brainer if it kills orders of magnitude fewer people.
The US rate is roughly 1 fatality per 100 million vehicle-miles travelled (VMT) [0], and the number's declining. So if Tesla sells a thousand cars, each travelling a thousand miles, that kill 2 people in the first year, they're already above the mark. If there was a person between the truck and the Tesla, they'd be way above the mark. Even more so when you consider "Summon Mode" isn't full self-driving and is probably being used less than 0.1% of the time the car is turned on.
More problems are caused by taking things too fast than taking it too slow. Full self driving cars are probably farther off than you guys seem to think.
There are somewhere around 100,000 Tesla cars on the road (as of the end of 2015). If each is driven, say, 10,000 miles, that is a total of 1,000 million miles driven. At the current US average of over 1 fatality per 100 million miles driven, that would make about 10 fatalities per year expected from Tesla drivers.
Not all Tesla cars are going to be driven in self-driving mode at first, so we can expect the numbers to look much worse early on. If only 20 percent of Tesla drivers let the cars self-drive (and of course only the newer Teslas will be capable of self-driving because of sensors, etc.), we are down to about 2 deaths per year expected from human drivers in that 20 percent.
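A quick sanity check of that arithmetic (the fleet size, annual mileage, and fatality rate are the rough figures assumed above, not measured data):

    fleet_size = 100_000                    # rough Tesla fleet, end of 2015
    miles_per_car = 10_000                  # assumed annual mileage per car
    fatalities_per_mile = 1 / 100_000_000   # ~1 US fatality per 100 million miles

    total_miles = fleet_size * miles_per_car        # 1,000 million miles
    expected = total_miles * fatalities_per_mile    # whole-fleet expectation
    print(expected)                                 # 10.0 fatalities per year
    print(expected * 0.20)                          # 2.0 if only 20% self-drive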
I used to think that these kinds of numbers would act as a barrier to the development of self-driving cars, but each time one car has an accident all of the cars will learn how to avoid it the next time. Every human driver has to learn what to do around icy roads, what to do when cut off by a car in a neighboring lane, what to do when approaching a neighborhood where kids and dogs are playing ball in the front yards, but a self-driving car only has to learn once and all the other self-driving cars will know what to do too.
When I was growing up, there were around 6 or 7 times as many people killed per vehicle-miles traveled. I hope that self-driving cars won't be as dangerous as the cars of the 50's and 60's when they first hit the road. In the longer run as thousands and eventually millions of self-driving cars begin driving I expect them to improve rapidly though their shared experiences.
That is dependent on them actually being software-fixable problems.
In this case, the sensors do not actually monitor the complete space taken up by the vehicle, so this kind of accident would be impossible to prevent by modifying software.
The public may not react rationally to deaths caused by autonomous vehicles. If the technology kills people at a lower rate than the existing technology (manual control), then pushing it seems appropriate. Worrying about good PR could save lives.
>Let's first worry about pushing a technology that, y'know, kills people.
I agree. We need to get all car ads off TV and quit pushing for people to own automobiles. Pushing this technology into the hands of as many unqualified people as possible is a recipe for death and disaster, killing tens of thousands of people each year.
I'll push for any technology which will kill fewer people.
How hard is it to disable the “dead man's switch” for this feature? Can it be done without searching the forum for hours? Is it documented in the owner's manual?
The direction of my blame here kind of depends on the answer to those questions. Of course, it's technically the owner's fault, but a feature like this really needs to be 100% idiot proof.
This is a new feature to many people, and it's exactly the type of feature that people are going to “test” outside of the ideal operating conditions. It's not Tesla's responsibility to account for every stupid decision of its customers, but Tesla should have at least done everything in their power to ensure that critical safety features couldn't be disabled (which they may have done; I don't know).
Most critical safety features on cars can't be trivially disabled (ABS, airbags, automatic seatbelt locks, etc...). The only safety feature that I can think of that can be trivially disabled is traction/stability control, but there's a real reason for this (getting out of deep snow/mud). Also, disabling traction/stability control is a multi-stage process on many cars. On late model BMWs at least, pressing the “DTC” button once will partially reduce traction/stability control, but not completely disable it. To the average person, it appears to be completely disabled. However, if you do a little research, you'll find that if you hold it down for another 5 seconds, it disables completely (sort of). Even with it completely disabled, certain aspects of the system remain on. The only way to completely disable those portions would be to flash custom software to the car (which is well beyond the ability of the average person).
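The multi-stage pattern is basically a tiny state machine that makes full disable deliberately hard. A toy model (the BMW behaviour is as I understand it from owner documentation and forums, not an official spec):

    class StabilityControl:
        """Toy model of the multi-stage disable described above."""
        def __init__(self):
            self.mode = "full"            # full assist by default

        def press_dtc(self, hold_seconds=0.0):
            if hold_seconds >= 5:
                self.mode = "mostly_off"  # long hold: "disabled", but parts stay on
            elif self.mode == "full":
                self.mode = "reduced"     # single press: partial reduction only
            return self.mode

    sc = StabilityControl()
    print(sc.press_dtc())                 # -> 'reduced'
    print(sc.press_dtc(hold_seconds=5))   # -> 'mostly_off'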
Single toggle in the normal Summon settings screen with a help message about the great convenience features it enables, apparently: https://youtu.be/Cg7V0gnW1Us It's like Tesla want people to disable it. (Their original version didn't even have a dead man's switch; they added one after Consumer Reports raised concern about its safety.)
I see a lot of comments about safety here. Note that Tesla's Summon mode limits the car to 1MPH and 39ft of movement. It also is very sensitive to resistance, to the point that I actually had to construct ramps for my car to climb the 1-inch lip at the entrance to my garage, otherwise it would stop at that point and refuse to go further. Using this feature to kill somebody is going to take a lot of effort. There are many interesting discussions to be had here about software, UX, corporate responsibility in the face of user error with bad UX, etc., but I don't think there's much room to discuss safety here. This is a risk to property, not life.
> It also is very sensitive to resistance, to the point that I actually had to construct ramps for my car to climb the 1-inch lip at the entrance to my garage, otherwise it would stop at that point and refuse to go further.
The pictures of the car in an article linked elsewhere in this discussion (https://www.ksl.com/?sid=39727592&nid=148&title=utah-man-say...) show that the windshield was smashed. I'm finding it challenging to reconcile your personal example with the images of the smashed windshield; in my own experience, it takes some effort to break laminated safety glass. I would think (perhaps wrongly) that it takes more force to break safety glass than to climb a 1-inch step.
> but I don't think there's much room to discuss safety here.
I couldn't disagree more. I don't understand why it should be possible to disable any safety interlock (at least, that's how I'm interpreting the feature description) in a consumer product, especially persistently.
> I'm finding it challenging to reconcile your personal example with the images of the smashed windshield
Really? It seems fairly obvious to me. The sensors are in the front of the car, lower down. So anything on the ground in front of the car registers as an obstacle and the car stops. Such as a small step.
The front of this trailer (they keep referring to it as the back, but it's clearly the front) was too high off the ground to register as an obstacle. You'll note that the front of the car has plenty of room - there's nothing blocking it - but the windscreen doesn't!
So it needs to be fixed; you could imagine a Tesla running into, say, a small truck with timber sticking out the back when the speed was low enough that the safe distance between vehicles was small. But it hardly seems life-threatening in any way.
The lip isn't detected by the sensors. The car stops on it when the tires hit it because of the physical obstacle it presents and the extra force needed to climb it.
It's weird, usually the car actually passed over the lip with the front tires, but then stopped when the rear tires got to it. The threshold for stopping must be very close to what it actually encounters there.
Someone on Youtube had what I thought was a clever solution: a piece of trim (looked like shoe molding) to bridge the 1" step. Cheap, and they claimed 100% effective.
This could easily have been a garage door. And in Tesla's own words, Summon can be used to park in a garage, and handle opening and closing the garage door.
It may not be a sensor failure, but for Tesla's advertised (and this) use case, it definitely is a _design_ failure.
If the car couldn't sense it to avoid it in the first place, what makes you think it's going to stop when it hits your door, versus continuing on plowing through?
From an updated version of the story (http://www.theverge.com/2016/5/11/11658226/tesla-model-s-sum...) it sounds like the driver was standing next to the car when it crashed. Tesla says that Summon mode started operating three seconds after he got out of the car.
But enabling Summon mode is neither quiet nor automatic. It's a manual process, and it dings and shows a light on the dashboard. Tesla is saying the guy turned on Summon mode in a place where he wasn't supposed to, in a situation it was never designed to work in, and that's why the driver is at fault. The software worked as designed, but the driver didn't pay attention to its limitations.
That's not true. The software is designed to work under human supervision, and the autonomous collision avoidance is a supplement to that supervision, not a substitute.
You could certainly argue that the software should be designed to, above all else, not hit stationary objects. But it is not actually designed that way.
What is your basis for the statement that it was not designed to work under human supervision? The documentation is quite explicit about needing human supervision, and I see nothing whatsoever indicating anything otherwise.
And you're still ignoring the part where I said it doesn't matter if it was designed intentionally this way or not, it shouldn't behave this way, regardless.
Really, you think that "X helps Y" is compatible with X being intended as the sole thing performing Y? If so, we clearly speak fundamentally different languages, despite the superficial similarities in spelling and such.
As for ignoring the other part, I explicitly acknowledged in my original reply that this would be a valid criticism, and it's not something I feel strongly enough about to argue.
You said it's designed to work without human supervision. Unless you're proposing some third entity besides the car and the human which would be responsible for collision avoidance, then what you said is that the car is fully responsible for it.
"It was not designed to work under human supervision...."
I don't know how else to understand that other than that it was designed to work without human supervision. If that's not what you meant, perhaps you could elaborate.
I meant what I said. Collision detection was not designed to work under human supervision, which means when it runs into something, it has failed its design.
> Collision detection was not designed to work under human supervision, which means when it runs into something, it has failed its design.
From your quote upthread:
> "Digital control of motors, brakes, and steering helps avoid collisions from the front and sides, and prevents the car from wandering off the road."
(Emphasis mine.)
Notice the shift in language from "helps avoid" to "prevents" when describing the two different aspects of the car's thrust and positioning systems. The difference is significant and important. It's clear that the systems are a front and side collision assistance system, not a front and side collision prevention system.
Is your name Jared Overton? Or is that the name of your client? Because no impartial person would be so deliberately dense as to argue against his own words.
I mean, my laptop is designed to detect a fall and park the hard drive automatically to avoid damage. But if I throw it off a cliff, it's probably still going to get damaged, and Western Digital is going to laugh at my warranty claim. Because the software worked as designed; I just ignored the limitations and took the hardware outside of the design spec. Yes, the hard drive can protect against falls, up to a certain amount of G-force. Yes, the Tesla can move itself around stationary objects, within the limitations of where the sensors can read.
I think you need to reevaluate your definition of "as designed". Because it's pretty clear that the Tesla was not designed to avoid this collision. It's not a software failure.
> The Tesla was designed to avoid this collision. Are you trying to say that Tesla would prefer their car hit stuff?
They don't prefer their car to hit stuff, but the car is not designed to never hit stuff either. By your logic, any car manufacturer that doesn't have some sort of collision prevention system prefers their cars to hit stuff, which is absurd.
That's not my logic, it's the logic of the comment I replied to. What I'm trying to say is that they specifically designed and built a collision detection system with the goal of preventing the car from hitting things parked in front of it. That collision detection system did not detect the collision that took place in this case, and so failed in its design goal, its raison d'être.
The car is designed to prevent collisions that can be detected by the sensors. This particular collision could not be detected by the sensors, therefore it isn't possible for the car to prevent it.
Are you basically saying a bulletproof vest has failed to achieve its design goal if it can't stop armour-piercing rounds? Or a fire extinguisher has failed to achieve its design goal if it can't put out a wild forest fire? There are always limitations to everything.
I'm saying Tesla engineers aren't looking at this and throwing their hands up in the air, claiming this was supposed to happen, and I'm saying a bulletproof vest has failed to achieve its design goal if you can't wear it when it's wet.
From the accounts here, it sounds like it's pretty easy to activate Summon when you just want to park your car, by accidentally pressing "park" twice instead of once. This is compounded by the fact that there is no "off" for Teslas; you're expected to park it and get out. Sure, there might be dings and dashboard lights, but I can easily see how someone might not see them, or might not know what they signify is about to happen. Finally, as far as I'm aware, there is no external sound or indication that the car is in Summon mode; it just starts to move, so I can easily see how, once you exit the vehicle, you'd be unaware of it happening behind you.
Activating Summon by double-clicking the Park button is an optional feature that's off by default. In addition, Summon itself is an optional feature that's off by default. Both settings come with warnings about appropriate use. So while the feature is fairly easy to activate once you have the settings enabled, the easy answer to it is to leave it disabled.
Is it possible that those safeguards were deactivated by the Tesla delivery specialist, or by the customer at the suggestion of the delivery specialist, in order to demonstrate how the feature works? I genuinely don't know if that's something the delivery specialists do, but I think it's plausible. Tesla demonstrates Autopilot on test drives as a matter of course, and I could easily see them wanting to demonstrate Summon to new customers.
If so, I think it's possible the customer was not made aware of the safety implications and limitations of the system.
It's possible, in that it's not against the laws of physics or anything. I've never heard of anyone having such things enabled by anyone from Tesla, though, and I suspect that if they did it for this person, he'd be telling everyone who will listen.
"[Tesla] is just assuming that I just sat there and watched it happen and I was okay with that." (in the video)
The article notes: "A worker at the business met him at the side of the road, Overton said, and asked him multiple questions about his car."
Tesla said 3 seconds passed from the door closing to the car moving.
It sounds to me like he was showing off some features to someone. It failed to stop as he expected it to. But he decided to blame the incident on Tesla.
I suspect that the "connect to your car" step wasn't even necessary - that the activity logs are constantly streamed to Tesla's service. And yes, it's worrying to me that they collect information to that level of detail, that the information is tied to your identity, and that they have no problem publicizing that data if you say anything bad about them.
If I buy a Tesla, the first thing I'll do is snip the antenna cable.
Maybe it's just me but I find the fact that Tesla has ready access to such detailed logs to be extremely creepy and pretty much a showstopper for me ever owning a Tesla.
That's not comparable. Toyota used black box data in _court cases_ where they were being accused of negligence, and people were attempting to hold them civilly and financially liable.
This is Tesla managing their PR and saying "if you say anything bad about us, we'll publish to the world your driving history" (or in other cases, as they've already done, disable features and functions in what is supposedly your property, or downgrade the software thereof).
Does anyone know exactly what kind of data Tesla can access and under what conditions? Do they routinely scan "their" whole fleet or just in cases like this?
When the equivalent of the "check engine" light comes on in a Tesla, the car prompts the driver to call the service number, and then the technician says they're downloading the logs from your car. That's not proof of what's actually going on, of course.
Tesla needs to have some sympathy for the user. People "will" accidentally press a button, sometimes more than once. Sometimes you press it just because it is there. This happens. The feature needs to be robust in the face of such failures, and it seems like that did not happen here.
Tesla's response to this issue is staggering to me. Summon mode is not a remote control. The car is controlling itself, and I don't think it's reasonable to expect the driver to be responsible for its actions when that is happening.
If you read the Verge article linked at the end, you'll see that Tesla's manual instructs that the user of Summon mode must monitor the vehicle and be prepared to stop it at any time. If this is the case, which according to Tesla's manual it is, then a simple remote-control operation would be better overall. If the user must monitor the car the whole time, they might as well just take control. No need for any smarts from the car itself.
There's no indication that the guy was aware he put it in Summon mode. He double tapped something he was trying to single tap and then didn't respond to a modal popup as he was exiting the vehicle.
This is not a UI interaction that should result in the car driving off by itself. Humans are fallible.
Seriously, imagine an episode of Star Trek where the computer hears the captain asking the engineer a question about the self-destruct sequence, and the computer hears the magic words "self-destruct sequence" and pops up a dialog box on a nearby console saying, "Self-destruct sequence activated. Press CANCEL to cancel." Well guess what, nobody expected the computer to activate the self-destruct sequence just because someone happened to mention it, so no one was looking at the console, so no one canceled it, so the ship blew up.
Way to go, Tesla. Great UI design there. I'll be glad to entrust my life to your software on the highway...
I agree. While Tesla might be accurate in describing the actions that occurred, does it make sense for a driver to intentionally damage their expensive car?
I wonder if the way Tesla is delivering software could be problematic. If the way you use your car can change week to week, how are you supposed to know what works and what doesn't? The user may be used to other modes where the car correctly detects when it will collide with something. I don't find it unreasonable for them to expect it to do the same thing here.
This feature is disabled by default, needs to be explicitly enabled and comes with warnings that it's an immature feature and you need to be very careful with it on. And even once you turn it on, it requires you to press a button on the key fob for it to work, but he went and disabled that safety feature. Tesla didn't slip this in without his notice.
The traditional Silicon Valley development culture that developed under web apps isn't up to the complexity that results when software meets the real world in safety-critical contexts. Heck, as formulated, it wasn't ready to deal with smartphone apps!
You have a feature where the car navigates itself through parking situations, and no hardware or software developer ever paid attention to overhanging obstacles? That wasn't even a concern!? To me, this smacks of the same kind of arrogance and shortsightedness that caused Nest to release thermostats that deactivated without WiFi signal.
It's not clear to me that 'beta' is, or should be, an allowable thing with regards to vehicles.
As far as I'm aware you're not legally allowed to put out-of-spec tyres or brakes on your car, but somehow a beta feature that can autonomously control a vehicle is okay. I'm not convinced.
Personally I'm not at all fond of the idea of trialing beta features. If you ride the bleeding-edge expect to get cut.
OK, so the main rule appears to be "the operator must be prepared to stop Summon if an object isn't detected". This exact problem has been predicted by others, including Google. If you create machines that are really good but still need rare supervision, you're not going to be able to convince people to provide that supervision.
> As such, Summon requires that you continually monitor your vehicle's movement and surroundings while it is in progress and that you remain prepared to stop the vehicle at any time using your key fob or mobile app or by pressing any door handle.
Let's consider the choices. Key fob? Might be in your pocket. App? Has anyone from Tesla tried to use the Tesla app? It frequently takes literally minutes to respond. What if you try to press the door handle and the [expletive removed] door handle sensor doesn't notice? (The latter happens all the time with my car. It usually works when I press very hard on it, which might be a challenging thing to do when the semi-autonomous car is moving.)
This crap makes me glad my Tesla is too old to support their beta Autopilot.
Safety is a top priority at Tesla, and we remain committed to ensuring our cars are among the absolute safest vehicles on today's roads. It is paramount that our customers also exercise safe behavior when using our vehicles - including remaining alert and present when using the car's autonomous features, which can significantly improve our customers' overall safety as well as enhance their driving experience.
Summon, when used properly, allows Tesla owners to park in narrow spaces that would otherwise have been very difficult or impossible to access. While Summon is currently in beta, each Tesla owner must agree to the following terms on their touch screen before the feature is enabled:
This feature will park Model S while the driver is outside the vehicle. Please note that the vehicle may not detect certain obstacles, including those that are very narrow (e.g., bikes), lower than the fascia, or hanging from the ceiling. As such, Summon requires that you continually monitor your vehicle's movement and surroundings while it is in progress and that you remain prepared to stop the vehicle at any time using your key fob or mobile app or by pressing any door handle. You must maintain control and responsibility for your vehicle when using this feature and should only use it on private property.
Maybe it's early days yet, but I'm not comfortable with the approach taken by Tesla. Either give the car full control, or else the operator must be in full control with the technology playing an assistive role. I cannot be expected to sit doing nothing behind the wheel for hours and then suddenly be called upon to take over the driving in a split second.
Obviously for the former option Tesla is not there quite yet (and that is an understatement), but I wonder whether better sensor tech might help here. Sensor tech is reasonably robust; thus, for example, even if the autopilot is no longer able to properly make out the markings on the road, a sensor override should be able to determine (using radar?) the locations of nearby vehicles within, say, a 100 meter radius, thus ensuring collisions are avoided, and give the human driver several seconds or even a minute to take over (a rough sketch of that idea follows below).
No wonder this [1] effort from Volvo focuses on, among other things, radar tech. I think that is a core technology for the success of self-driving cars.
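A rough sketch of that fallback idea (all thresholds, names, and behaviour here are my own assumptions for illustration, not anything Volvo or Tesla actually ships):

    from dataclasses import dataclass

    @dataclass
    class RadarTrack:
        distance_m: float         # range to a nearby vehicle
        closing_speed_mps: float  # positive means the gap is shrinking

    def fallback_step(lane_confidence, tracks, handover_seconds_left):
        """If lane detection degrades, fall back to radar-based gap keeping
        and run a handover countdown instead of dropping control instantly."""
        LANE_CONF_MIN = 0.5   # assumed confidence threshold
        MIN_GAP_M = 30.0      # assumed minimum following gap
        TICK_S = 0.1          # assumed 100 ms control loop

        if lane_confidence >= LANE_CONF_MIN:
            return "normal_autopilot", handover_seconds_left
        closing_in = any(t.distance_m < MIN_GAP_M and t.closing_speed_mps > 0
                         for t in tracks)
        action = "brake_gently" if closing_in else "hold_speed_alert_driver"
        return action, max(handover_seconds_left - TICK_S, 0.0)

    print(fallback_step(0.2, [RadarTrack(20.0, 1.5)], 30.0))  # ('brake_gently', 29.9)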
To be clear: this was a UX failure on the part of Tesla, not merely a sensor failure. And they are super wrong to blame their faultless user for their screwup.
The UX failure is that this happens if you simply double-tap the park button and exit the vehicle. That's it! It starts moving forward. No fob interaction, no confirmation, nothing. I'm not making this up; it's insanely bad UI. [1]
A double vs. single tap is super easy to do mistakenly. And there we get to the really sh*tty thing: Tesla must know this, and they are selling out their customer to cover their ass. The whole "the logs prove the user is in the wrong" line is wrong and disingenuous. All the logs show is that he double-tapped the park button. He probably meant to do a single tap!
Shame on Tesla for disingenuously blaming the user for their design error with such a safety critical feature. I hope they are more careful than this as they move forward. (No pun intended.)
This feature has me wondering what the SAE standards for automatic transmissions have to say on the matter, if anything?
I'm not familiar with them, nor do I have access, but I'm hoping someone here might be helpful on either aspect. Specifically, I'm wondering if the standards for automatic transmission controls recommend against having the Park setting do anything beyond activating the parking pawl and parking brake (when electronically controlled).
I'm not trying to take this to extremes, but my first thought reading this, especially the part about it not necessarily detecting objects close to the ground, is how long will it be before a Tesla in summon mode runs over a kid playing in the driveway? It doesn't seem inconceivable, and you would think it should be literally inconceivable that something like that could happen before releasing the feature. If it was a critical safety feature that could have a deadly side effect - like an airbag - that's a different thing entirely. But a convenience feature more akin to comfort locks should be held to a higher standard of safety.
It's a similar situation to determining what side effects are acceptable in medication. If it cures a deadly disease, serious side effects including chance of death are acceptable. If it's a cure for, say, male pattern baldness, that level of risk would obviously be unacceptable.
It's a limitation of the system, for which they have clear warnings in the user manual as well as next to the button that enables this feature. It's not their fault a user failed to understand the system's limitations.
It's even worse then. I don't see it as a limitation. It should never have made it into production at all if that is how it works. I wouldn't have added that feature in.
Yes, but what's worrying is their response. Instead of taking responsibility for the fail-unsafe UI design and their failure to avoid obstacles above a certain height, they are completely blaming the user.
I can't see myself trusting their cruise control or autopilot features. It's like they're saying, "Hey, our cars can drive themselves! But not really. They can maintain speed and lanes and avoid other cars on the highway! But you can't take your hands off the wheel. You can relax and let the car steer for you! But you'd better be ready to take control in a millisecond if it beeps at you. Oh, and if anything bad happens, it's your fault for using it wrong. Thanks for participating in the Tesla Autopilot Early Access Beta Program!"
>We can't be required to be smarter than the software
Jesus - that's the kind of stuff that makes me afraid for the future of humanity. Yeah I get that this may very well have been caused by human stupidity and also that the software that powered the car should have been smarter than to do that, but that quote just gave me the chills.
I read it as meaning "we can't be expected to understand the car's algorithms well enough to be able to predict what the car is going to do and react to the output of its computations in real time."
Which as a general sentiment, abstracted from the particularities of this case, is entirely reasonable.
But in this particular case, the quote is saying: "One can't possibly be expected to understand the two sets of clearly written dire warnings and strident instructions one was required to acknowledge before enabling this feature and using this feature. And one can't then be expected to bear the blame for proceeding to fail to follow the instructions one was given to ensure the safe operation of one's car while using this feature.".
How come the NHTSA doesn't have rules concerning this scenario? Regardless of how it was operated the car should not run into obstacles when no one is behind the wheel and it is guiding itself.
Well, given the shape of the obstacle, could it be that the Model S doesn't have sensors for that? I.e., there's clear space in front of the car at sensor height, but the obstacle sits higher up.
This is a generic FUD-ism that doesn't seem especially relevant to the topic at hand. If somebody is flying five feet over the ground, a car windshield bumping them at extremely low speed is probably not in their top 10 biggest concerns.
I worry for protruding objects in garages, but I don't see how anyone's bones are in danger here.
Can you explain the actual danger you're imagining rather than trying to be pithy? Because I have no idea how you envision this "breaking through skin." The car isn't even moving fast.
Considering that cars are several-ton explosive machines that are among the most lethal devices we interact with regularly, the idea of "running them through the entire gamut" hardly seems like a terrible idea.
That's the "deadly valley" I've written about before - enough automation to almost work, not enough to avoid trouble, and expecting the user to take over when the automation fails. That will not work. Humans need seconds, not milliseconds, to react to complex unexpected events. Google's Urmson, who heads their automatic driving effort, makes this point in talks.
There is absolutely no excuse for an autonomous vehicle hitting a stationary obstacle. If that happened, Tesla's sensors are inadequate and/or their vision system sucks.