Model X Crash – Open Letters Between Driver and Tesla (teslarati.com)
37 points by zaroth on Aug 2, 2016 | 72 comments


I editorialized the title a bit to remove the clickbait.

This line of Tesla's letter back to the driver caught my eye:

  The diagnostic data shows that the driver door was later
  opened from the outside...
It is pretty impressive that the diagnostics log not just that a door was opened, but which handle was used to open it! I guess Tesla has had enough issues with their retracting outer handles to warrant some additional diagnostics in this area, but those logs sure do come in handy.


In this crash they give details like the door handle opening from the outside. In the Pennsylvania crash they say autopilot turned itself off 11 seconds before the crash.

In the Florida case they say 'who knows?'. How many seconds did the guy in Florida not have his hands on the wheel? How many alarms were sounded? What did the radar detect? What did the sonar say? Nothing.

If Tesla wants to be trusted they need to give out all the relevant information for all crashes, including timestamps, not pick and choose crafted statements that make their PR department happy.


The NTSB is involved in the investigation of the Florida case, but not the others. NTSB investigations are rigorous in the extreme, and move at a pace much, much slower than the media events surrounding the other crashes. Any data that Tesla has regarding the crash will be meticulously picked apart and anything relevant will be published in the comprehensive NTSB report. [1] I imagine there is some force compelling Tesla to be more quiet on that case, whether it is deference to the NTSB, or respect for the fatality. Either way, the data will become known eventually, from quite possibly the most transparent and independent source available. I look forward to the report.

[1] http://www.ntsb.gov/investigations/accidentreports/pages/acc...


They said that the radar and camera detected the trailer but that it looked like an overhead sign and so was ignored. The sonar sensors play no role in preventing forward collisions, as they only have a range of about 16 feet. They are used to aid parking and to avoid side collisions from cars in adjacent lanes.

Since the system didn't detect any danger, I think it's safe to assume that no alarms sounded. Hands on the wheel isn't relevant, since the problem wasn't one to be solved with steering.

Aside from timestamps (which don't seem particularly useful here), this seems to include just about everything interesting.


> They said that the radar and camera detected the trailer but that it looked like an overhead sign and so was ignored.

Has Tesla or NTSB actually said that? From ref. 1, published today, my emphasis:

    Tesla *is considering* whether the radar and camera input
    for the vehicle’s automatic emergency braking system
    failed to detect the truck trailer or the automatic
    braking system’s radar may have detected the trailer but
    discounted this input…
[1] http://finance.yahoo.com/news/tesla-mulling-two-theories-exp...


Looking into it, Elon Musk said that the radar ignores overhead signs, but he didn't actually say that was what happened in this case:

https://twitter.com/elonmusk/status/748625979271045121?ref_s...

I don't think it matters much. The main thing is that the system didn't detect the trailer. Exactly why it failed to detect the trailer is interesting, but doesn't change anything about fault or responsibility. And if Tesla is still considering the possibility, then apparently it's hard enough to figure out that just releasing logs wouldn't help us.


> The main thing is that the system didn't detect the trailer.

Has that information been made public? Unless it has, that statement would also appear to be speculative at this point.

> Exactly why it failed to detect the trailer is interesting, but doesn't change anything about fault or responsibility.

I would say that it is far too early to make that declaration, given that the investigation is ongoing.


From Tesla's original post about the crash:

"Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky" - https://www.tesla.com/blog/tragic-loss

I have trouble imagining the underlying reasons for this being particularly interesting. I guess if the lighting was so bad that even the driver couldn't see the trailer then that would excuse him from fault, but I don't see how it changes Tesla's role in it.


Short of an unholy ritual involving the supernatural, Tesla PR or Musk can only guess as to what the driver noticed in this case, and neither can accurately speak to what either the driver or the car did or did not do until the investigation is completed.

I think there's a lot to be concerned about wrt. AutoPilot and I plan on thoroughly reading the completed investigation reports.


The driver never braked. It's possible that he noticed the trailer and refrained from braking for some reason, but that would be really weird.

I don't see why they have to wait for an investigation to talk about what the car did, nor about what the driver did with the car. They have all that information. The purpose of the investigation is to contextualize that information and incorporate external information like the speed and position of the truck in the time leading up to the crash.


> The driver never braked. It's possible that he noticed the trailer and refrained from braking for some reason, but that would be really weird.

He could have had a medical complication. He could also have erred in thinking that the trailer would clear the path in time, or in thinking the car would brake for him, or perhaps his reaction time was simply insufficient. I have no idea. None of the items on my brainstormed list strikes me as weird.

> I don't see why they have to wait for an investigation to talk about what the car did, nor about what the driver did with the car.

Tesla doesn't have to, but the skeptic in me observes that it is all too easy to spread misinformation (that would benefit them) through the technology press before the investigation is completed. I personally don't see an advantage for Tesla to speak at this stage since there are serious product liability issues in play, but I admittedly don't get Tesla in general.


But Autopilot users are supposed to be watching the road. Assuming this is the case, if the user had seen the vehicle he ought to have applied the brakes.


As a famous engineer said, "and if my grandmother had wheels, she'd be a wagon."

Something to consider is that a "good" risk analysis takes into account the foreseeable use and misuse of a product based on the information available. Given that it is no secret that AutoPilot users have used the system contrary to directions, it is reasonable to demand that the risk analysis be updated to account for this reality (assuming it did not account for this) as part of the continuous PLM process.


The risk analysis certainly needs to account for misuse, but that doesn't necessarily mean that the product needs to be designed to prevent it. If cars were designed that way, they'd all be governed to 85MPH and refer the driver for mandatory training if they failed to signal for turns too frequently.


We could certainly ask. It may be that they are legally required to publish certain data but not certain other data. It might be the NTSB that determines this, or the privacy laws in the state where the crash took place. Tesla has been pretty transparent, or has appeared to be, for years now.

To posit a deliberate conspiracy to obscure unflattering facts seems unnecessary when the company and Musk himself have faced up to unflattering facts before — largely in order to publicly spin them in a positive light, admittedly, but the facts weren't suppressed.


Maybe that particular data didn't get transmitted / got corrupted?


Would they not have said something to that effect, then?

It does appear, so far, that these "log dumps" are being carefully sifted through and cherry picked for release.


Maybe Tesla has ownership of this data and only publishes the data that helps their case.


Why are we left to guess about what data exists?


If I were to log door opening events, I'd log what switch/sensor detected that event. Just seems like the natural thing to do to me, and not an indicator of anything particular.


Especially since they have different behavior. The outside handle can't open the door while locked, but the inside handle can.


Door ajar is a pretty common sensor. And I think sometimes this is wired as just "a door is ajar", so the ECM doesn't even really know which door is ajar. Logging whether a door was opened by the inside vs. outside handle seemed to me like it was definitely taking it to the next level.


I don't think so. For one, doors can usually distinguish which handle was used, if only to decide whether or not to fire an alarm in case the door wasn't unlocked with the keys.

From a code perspective, I know for a fact that it'd log which handle was used to open a door, just as reflex programming. The same way I'll log a user ID or any potentially useful value in scope for debugging.
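
Purely as an illustration of that reflex (a made-up sketch, not anything resembling Tesla's actual code or log schema), the handler that already knows which handle switch fired just records that field along with whatever else is in scope:

  import json, logging, time

  logging.basicConfig(level=logging.INFO)
  log = logging.getLogger("body_controller")

  # Hypothetical handler: the switch that fired already tells us which
  # handle it was, so logging it is one extra field, not extra hardware.
  def on_door_opened(door, handle, was_locked):
      log.info(json.dumps({
          "ts": time.time(),          # event timestamp
          "event": "door_opened",
          "door": door,               # e.g. "front_driver"
          "handle": handle,           # "inside" or "outside"
          "was_locked": was_locked,   # locked state when the handle was pulled
      }))

  # e.g. the driver door opened via the outside handle on an unlocked car
  on_door_opened("front_driver", "outside", was_locked=False)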


Tesla says that only the passenger side was opened at first, and the driver side was only opened from the outside. And yet the accident was on the right side of the vehicle, so if either door was going to have trouble opening, it would be that one. If Tesla's information is correct--and isn't a sensor malfunction or similar--then my hypothesis would be that the driver was not even in the driver's seat. However, we're also told that steering and braking happened before the car came to a stop, so is the idea that the driver climbed over to exit through the damaged passenger side rather than the untouched driver side? Possibly while running from the noise? Alternatively, the logs are incorrect.

Tesla also says that there was an "abrupt steering action" that preceded hitting the first post, and that the car was not auto-steering at all for the other 11 posts. There's some fuzziness in Tesla's description. The implication is that the driver hit the brakes immediately, but it's only an implication. No timestamps are provided, nor clarifying language beyond the broad implication.

I'm generally thinking Tesla's information is probably correct, BUT I'm really, really, really tired of hearing about how two seconds of no-hands is somehow unreasonable, when the car itself allows for much, much longer. If you want to say that someone has to keep a hand on the wheel, have it beep and prepare to disengage after two seconds of no-hands, not 15 minutes. If you allow for 15 minutes, don't be shocked when people use all 15 minutes.


  As road conditions became increasingly uncertain, the vehicle
  again alerted you to put your hands on the wheel. No steering
  torque was then detected until Autosteer was disabled with an
  abrupt steering action. Immediately following detection of the
  first impact, adaptive cruise control was also disabled, the
  vehicle began to slow, and you applied the brake pedal.
The order of events actually isn't clear from Tesla's letter, but I read this to mean the abrupt steering action came after the first impact.


It was not two seconds, it was two minutes (in the response.)

The car starts to alert you by beeping when it doesn't detect hands for a few minutes, and displays a message on the dash to grab the wheel. It does this a few times (every ~10 seconds, I think), after which, if there are still no hands on the wheel, it starts to slow down and come to a complete stop.

In this case, it looks like the driver disabled Autosteer by abruptly tugging on the steering wheel after two alerts.
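
To make the escalation concrete, here is a toy sketch of the behavior described above. The thresholds are guesses from this thread, not Tesla's actual parameters:

  import time

  FIRST_NAG_AFTER_S = 120   # roughly "a couple of minutes" with no torque
  NAG_INTERVAL_S = 10       # repeat the alert every ~10 seconds
  MAX_NAGS = 3              # give up after a few unanswered alerts

  def monitor_hands(torque_detected, now=time.monotonic):
      """Toy hands-on-wheel monitor; torque_detected() reads the sensor."""
      last_torque = now()
      nags = 0
      while True:
          if torque_detected():
              last_torque, nags = now(), 0
          elif now() - last_torque > FIRST_NAG_AFTER_S + nags * NAG_INTERVAL_S:
              nags += 1
              print("BEEP: place your hands on the wheel")
              if nags >= MAX_NAGS:
                  print("No response: disengaging and slowing to a stop")
                  return
          time.sleep(0.1)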


Two replies, each of them assuming something completely different from Tesla's response. Tesla's communication is so poor, it's hard to see that as anything other than deliberate.


It is impressive that Tesla is doing this in-depth logging. I don't think any of the other, non-electric vehicles log this much.


In terms of how this may work, most cars have a CAN bus (or a few), which is basically a stream of events happening in the car. For example, when you press the volume-down button on the steering wheel, it sends a message on the bus; the radio receives it and turns the volume down. The same is true for most things in the car, so all Tesla needs to do is attach a device to the bus which logs everything. Other car companies could do the same relatively easily.
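
For a sense of scale, here's roughly what such a logging device amounts to, as a sketch assuming a Linux SocketCAN interface and the python-can library (the channel name is made up; a real car would tap whatever bus it actually has):

  import can  # python-can: pip install python-can

  # 'can0' is a typical SocketCAN channel name on a Linux box wired to the bus.
  bus = can.interface.Bus(channel="can0", bustype="socketcan")

  # Every frame carries an arbitration ID (identifying the sender/signal,
  # e.g. a door switch or the volume button) plus up to 8 bytes of payload.
  # Appending them all to a file is the whole "logger".
  with open("can_log.txt", "a") as f:
      for msg in bus:
          f.write(f"{msg.timestamp:.3f} {msg.arbitration_id:#05x} {msg.data.hex()}\n")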


Impressive? Or immoral?

When I purchase a $100,000 vehicle, I want the vehicle to act under my control. Not under Tesla's control.

The sensor data belongs to Tesla, not to the customer. And Tesla uses the sensor data to serve its own public-relations interests instead of helping the individual customer.


Is there any mechanism that will disable all this logging?

> When I purchase a $100,000 vehicle, I want the vehicle to act under my control. Not under Tesla's control.

I feel the same way, and kind of find it intrusive that Tesla can seemingly obtain this data at will without my explicit permission.


I'd bet it's mentioned in the paperwork somewhere when you buy it. Alas I haven't had the good fortune of being able to go through the purchasing process, but give me $100,000 and I'll find out.


> I'd bet it's mentioned in the paperwork somewhere when you buy it.

It probably is - but that doesn't make it feel better. I want to explicitly allow log collection upon request, or disable it completely.

Where my car goes, how fast it was driving, which window I had open, whether I was listening to AM or FM radio--frankly, none of that is Tesla's business. As it is, these logs seem to be used only to defend Tesla's PR, which, as a simple customer, is not something I'm interested in.


They do ask for explicit permission.


My car is getting better all the time because of that sensor data they collect, and I knew what I was getting into when I bought it. It's not like they hide the fact that the car is constantly connected to the mothership.


It is immoral, however, if they consistently cherry-pick sensor data and issue press reports that only favor Tesla.


If you say so. I merely object to the part where you say "instead of helping the individual customer." I'm an individual customer and I'm being helped a lot by it.


Well that raises the question, if you want a $100K vehicle that is under your control, why would you buy a Tesla, one of the draws for which is increasing automation? For that matter, many systems are out of your control even on more ordinary cars.


>As road conditions became increasingly uncertain, the vehicle again alerted you to put your hands on the wheel.

Why not publish the video from the car's Autopilot for, say, the last 30 seconds?

Looking at the photos, the road seems straight, with proper lane markings clearly visible. Though, given that it was 2 AM, maybe Autopilot has issues in low light--if, say, they use narrow lenses (for cost as well as depth-of-field reasons), that would mean either a more sensitive sensor (more noise) or longer exposures (lower FPS), and the relative colors of objects are different when illuminated by headlights instead of the Sun--all of which makes it harder to discern the objects. Yet these would all be just technical issues that should have been solved before product release.


Even if that info is saved locally, video is certainly not uploaded to Tesla's servers over the air.


There's no facility to record the video in the car either. The video is processed directly in the camera module, and only a high-level summary is sent out from there. There's no connection with the bandwidth needed for video, and not enough storage in the camera module to save it there.


>There's no facility to record the video in the car either.

That is really hard to believe, given how useful dash cams are in cases of accidents, etc.

>and not enough storage in the camera module to save it there.

And how do all these dash cams do it? They store 8 hours or more. If Tesla didn't put an SD card in there, that sounds really not smart.

To "mikeash" below: thanks for the link. It does seems like a small camera (thus one can expect low light issues). So Tesla uses 3rd party system - no miracles here, i hoped that they developed their own and thus hoped that they would improve it fast. That using of the 3rd party system explains while there are a lot of things missing that one would naturally expect in the sensing system of an "autopilot" functionality. It is also explains why they are so defensive instead of just going ahead and fixing issues.


Autopilot actually does better at night. Better contrast. I think it may use infrared, but whatever it's doing, it doesn't have issues with low light.

What functionality is missing that you'd expect it to have? For the small number of sensors Tesla has (one camera, one radar, a dozen short-range ultrasonics), the system is amazing, and it's the best one commercially available right now (as verified by many independent tests, that's not just Tesla talking).

The system is from Mobileye, which is currently the best in the business. I don't see why this would hurt improvement (the system has improved dramatically through software updates) or why it would make Tesla defensive. It's not like they've tried to hide the fact that Mobileye provides the camera and image processing hardware.

(Note that Tesla has parted ways with Mobileye and the next generation of Autopilot is going to be a Tesla product. Not sure why, but it sounded like Mobileye delayed their next generation hardware too much for what Tesla wanted. But not really relevant to the current hardware.)


> at night. Better contrast.

I think we have completely opposite understanding of things here.

>What functionality is missing that you'd expect it to have? For the small number of sensors Tesla has

exactly - small number of sensors. That is one of the main deficiencies.

>why it would make Tesla defensive.

because they can't improve it.


"I think we have completely opposite understanding of things here."

OK? I'm telling you how it actually is. The system works better at night, because there's better contrast. At night, the lane markers are lit up by your headlights, and the road is very dark. During the day, the difference in color is much less distinct, and as such the system doesn't perform as well. I've seen this in action with my own eyes during thousands of miles of Autopilot driving.

The small number of sensors has nothing to do with using Mobileye technology. Tesla easily could have incorporated multiple cameras or radars; they just didn't. And that's not missing functionality, that's missing hardware. You haven't given me any functionality you expect the system to have that it doesn't.

"they can't improve it"

Of course they can. Did you not read the part in the comment you're replying to where I said, "the system has improved dramatically through software updates"?

I don't mean to offend you with this question, but do you actually know anything about this stuff, or are you just guessing? It's getting tiresome to correct all of your incorrect statements.


Here's a teardown of the camera module:

http://skie.net/skynet/projects/tesla/view_post/7_Autopilot+...

I'm not entirely sure what the connector is, but I'd wager it's a standard CAN bus, which wouldn't be suitable for video data.

Dash cams do it by recording to a big SD card. I have a 32GB card in mine. Obviously Tesla could do something similar; they just don't.
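
For rough scale (back-of-envelope, assuming a typical ~8 Mbit/s 1080p stream): classic high-speed CAN tops out at 1 Mbit/s, so it falls well short of even a modest video feed before you count protocol overhead. And 32 GB is about 256,000 Mbit; at 8 Mbit/s that's roughly 32,000 seconds, or around 9 hours of looped footage, which lines up with the "8 hours or more" figure upthread.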


The Tesla response talks about Autosteer. Is that different from Autopilot, or has there been a name change?

If I'm reading this correctly, all the crashes against the posts supposedly happened with the driver's hands on the wheel? If that's the case, then maybe the owner wasn't aware of the logging when he made those claims.

I also depend on application logs being correct, but errors in logging are a scary thought, in the sense of how hard (or near impossible) it would be to prove that what you are saying is the truth. And even beyond weird bugs, what happens when malicious actors change the computer records and take away all our proof of innocence (aka The Net)? I wonder whether really solid protection against such acts is possible. I want things to be more accessible, but can you truly make accessible and safe work together (not necessarily now, but far in the future)? I sure do hope so.


It wouldn't stop deceit if done at the engineering level, but one thing Tesla could do to gain trust in its logs is to let its drivers digitally sign a version of them. That way, if it ever lands in court (or some other form of arbitration), both sides could verify that Tesla's version and the driver's version match up -- that neither side has tampered with anything.

(I don't believe Tesla has ever tampered with or lied about what the logs say; this would merely address folks who have such doubts).
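
A minimal sketch of how that could work, assuming Ed25519 keys via Python's cryptography package (the file names and key handling here are purely illustrative):

  import hashlib
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  # At delivery: the owner generates a keypair and keeps the private key.
  driver_key = Ed25519PrivateKey.generate()
  driver_pub = driver_key.public_key()

  # At each log snapshot: both parties hash the same bytes; the owner signs
  # the digest and Tesla stores the signature next to its own copy of the log.
  log_bytes = open("vehicle_log.bin", "rb").read()   # illustrative file name
  signature = driver_key.sign(hashlib.sha256(log_bytes).digest())

  # Later (court, arbitration): anyone can check that Tesla's copy is
  # byte-for-byte what the owner signed back then.
  claimed = hashlib.sha256(open("tesla_copy.bin", "rb").read()).digest()
  driver_pub.verify(signature, claimed)  # raises InvalidSignature on mismatch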


Autopilot is an umbrella term that includes autosteer, traffic-aware cruise control, "summon" (moving the car forward or backward with the key fob), and automatic parking. Depending on who you ask, it may also include safety features such as automatic emergency braking and side collision avoidance.


Maybe cars should be fitted with a third-party black box, which could have logs streamed to it from the car and could also record voice and alerts in the cabin.


I believe Autopilot is the umbrella term for features including Autosteer, Autopark, Auto lane change, etc.


If the car's sensors were faulty, then the logs would be in error too.

It wouldn't be possible to say for sure whether the driver or the log is correct unless the sensors are physically recovered and tested.


Not necessarily - sensors are usually picked to be independent of each other. So e.g. if the camera was malfunctioning, the output of the accelerometers and door sensors could (probably) still be trusted. Impossible to say without the raw data though.


> No steering torque was then detected until Autosteer was disabled with an abrupt steering action. Immediately following detection of the first impact...

Well, which way did the driver steer? Away from the barrier to avoid more damage, like he claims, or into it causing the accident in the first place? This statement is ambiguous and I get the feeling from reading the whole PR response that Tesla is either omitting or cherry-picking information to obfuscate the truth.


This is __very__ ambiguous:

Mr. Pang states that:

    > the car suddenly veered right
And later that he:

    > managed to step on the break, turn the car left and 
    > stopped the car
So, that's __two__ steering motions (one to veer right and another to turn left and stop.)

Tesla only says:

    > No steering torque was then detected until Autosteer 
    > was disabled with an abrupt steering action. 
    > Immediately following detection of the first impact, 
    > adaptive cruise control was also disabled, the 
    > vehicle began to slow, and you applied the brake 
    > pedal.
That's one steering motion -- no indication of direction or magnitude. Also, Tesla's language deliberately avoids ordering the events. Since the "steering action" appears first in the paragraph, it seems as though this happened before the impact. That information is not actually encoded in the article, but a reader would naturally assume this is the case (I know I did).

Short of a publicly released log file or an investigation by a third-party, I don't think we'll ever know the truth of what happened.


I think Tesla should be forced to hand over all the log data, rather than picking and choosing which log data to release in order to avoid taking responsibility for an accident.


Roadster owners can download raw logs straight from their cars.[1]

For Model S/X, you need a proprietary device to connect to the car's ethernet port (it may be possible to build your own -- not sure if they are encrypted), or you need Tesla to download them and hand them over. Tesla has told me that they will only release logs when ordered to do so by a court, but that they will download and hold them locally upon an owner's request, so that if you think you will have a legal need for them you have time to make that request before they get pruned/deleted by the car.

[1] https://upload.teslamotors.com/

edit: Differentiated between Roadster and Model S/X policies.


Why? Do Dodge, Ford, Chevy, Mercedes, etc. have any obligation to step into every crash and turn over all kinds of data? Tesla looks into more crashes than they have to, and in turn is treated as more in the wrong for simply looking into them... they do better than any other car company, yet are basically punished for it by groups of people, for simply looking into wrecks to see IF there is something they could do better...


Dodge, Ford, Chevy, Mercedes, etc. do not collect or publicly release data at all, while Tesla does. Because Tesla collects and publicly releases data at their own choosing, they have an obvious motivation to release only data that limits liability against them.

They might not be doing that, but until and unless they make all data available every time they make any data available, it's a matter of faith.


Yeah... then what?

Police say "oh, you were doing 75mph in a 50? ticket!"... insurance rate hike, etc... how much liability would they be picking up if they exposed all of that, and how quickly would people be throwing a fit about privacy?

I know multiple people (in multiple states) who have been ticketed based on their YouTube videos... I'm sure police would try to use the logs just the same...


Let's recap this thread: 1. gordon_freeman suggests that Tesla should be forced to disclose what they have, rather than pick and choose what to share. 2. You said that (a) other companies don't do that, and suggested (b) Tesla is being treated unfairly as a result of doing more than any of those companies. 3. I highlighted that, as I believe gordon_freeman was saying, the difference is that Tesla is already collecting that information and using it very selectively, putting them in a class of one. If they were to keep all of the data to themselves, that would be fine, or if they were to release it all, that would be fine, but as it is, nobody really has any reason to believe them.

And now here we are. I'm not sure why you're jumping to liability questions about logs of speeding and bringing up privacy (amusing, that one, when it comes to Tesla's public descriptions of events in their cars). I think the point is clear. Tesla, either provide evidence for your claims, or stop making claims. Simple.


The devil is in the details.


When will people learn to stop lying about the behavior of a car that keeps logs of everything it does?


Human memory is extremely fallible. It's likely that they think they are telling the truth, but their recollection of what happened isn't very accurate.

For example, a few months ago someone drove a Tesla into a wall by flooring the accelerator while parked. They said they had their foot on the brake. I saw a ton of people criticizing the driver for lying. But it's almost certain that the driver mixed up the pedals and really did think they were on the brake. They're not lying, they're just wrong.


> Human memory is extremely fallible. It's likely that they are telling the truth, but their recollection of what happened isn't very accurate.

I think you meant to say they believe they are telling the truth.


Yes, good point. I edited.


So how does this work? Do we trust the car or the man?

If the car does not detect any steering input, even though it has been given (due to bad software or hardware), or logs a warning being sounded, even though none manifested in the real world, what do we do? How do we know?

Bugs are by definition human assertions of computer failure, the computers can't lie (yet), but they can be wrong.

Are inward-facing dash-cams going to be required so that we have evidence of compliance with the terms of service of our vehicles?

I think it is rather unlikely that a false steering-input detection, a false warning log, and a false door-opening log all happened at once... perhaps not in this case, but subtle, cascading bugs can and do happen.


> So how does this work? Do we trust the car or the man?

Well, until there is a free tool to dump the logs from all of the Tesla cars, and Tesla the company starts releasing the raw data for the public to verify, the more appropriate question is:

Do we trust the man or the company?

Tesla has a _lot_ to lose, so the temptation to cherry-pick data points that tell their desired narrative is fairly strong.


I'd say trust the car...

It would be easy to correlate one sensor failure with one part of the crash (detected hands on the wheel, kept going in Autopilot--because that sensor failure would lead to the crash).

But to say that sensors failed, then warning sounds failed, then failsafes failed, becomes rather unbelievable.

Also, note the high-drama content of the customer's letter (there is no 50-foot drop-off; I just checked Street View for that entire section of Route 2). There is a railroad, then a river, but no steep drop. Even then, railroads pretty much stop cars dead trying to cross them (don't ask how I know; let's just say a 1986 Ford Tempo can go from about 45 mph to a dead stop real fast).


Agreed that the car data is more reliable than the human.

But since we don't have the logs themselves, every conclusion we come to is pure conjecture and obscured by cloudy language.


1. The driver acknowledged that he should have turned off autopilot, but the focus of the complaint is that the car did not decelerate after hitting the first post.

2. The driver claims that the warning sound never came on. If the alert was buggy (say, the car thought it had played the sound but the speaker malfunctioned), the logs would not necessarily show that.

3. The logs haven't been released AFAICT, and since Tesla Motors isn't under any oath not to lie, there's no proof that they're telling the truth here either.


#1 is an interesting case. There are two systems: Autosteer and adaptive cruise control. Adaptive cruise control is not generally built to hard-brake after detecting an accident; a Tesla will keep on cruising after an accident as long as the system is not deactivated (the driver touches the brake).

I'm guessing this behavior exists because the risk of hard braking at the wrong time outweighs the benefit of hard braking at the right time, so it's better to tell drivers the car will not hard-brake and that it's up to them. There must be some trade-off, because the car is certainly capable of hard braking by itself; it's just a matter of ensuring the software does it only at the right times.



