I editorialized the title a bit to remove the clickbait.
This line of Tesla's letter back to the driver caught my eye:
> The diagnostic data shows that the driver door was later opened from the outside...
It's pretty impressive that the diagnostics log not just that a door was opened, but which handle was used to open it! I guess Tesla has had enough issues with their retracting outer handles to warrant some additional diagnostics in this area, but those logs sure do come in handy.
In this crash they give details like the door handle opening from the outside. In the Pennsylvania crash they say autopilot turned itself off 11 seconds before the crash.
In the Florida case they say 'who knows?'. How many seconds did the guy in Florida not have his hands on the wheel? How many alarms were sounded? What did the radar detect? What did the sonar say? Nothing.
If Tesla wants to be trusted they need to give out all the relevant information for all crashes, including timestamps, not pick and choose crafted statements that make their PR department happy.
The NTSB is involved in the investigation of the Florida case, but not the others. NTSB investigations are rigorous to the extreme, and move at a pace much, much slower than the media events surrounding the other crashes. Any data that Tesla has regarding the crash will be meticulously picked apart and anything relevant will be published in the comprehensive NTSB report. [1] I imagine there is some force compelling Tesla to be more quiet on that case, whether it is deference to the NTSB, or respect for the fatality. Either way, the data will become known eventually, from quite possibly the most transparent and independent source possible. I look forward to the report.
They said that the radar and camera detected the trailer but that it looked like an overhead sign and so was ignored. The sonar sensors play no role in preventing forward collisions, as they only have a range of about 16 feet. They are used to aid parking and to avoid side collisions from cars in adjacent lanes.
Since the system didn't detect any danger, I think it's safe to assume that no alarms sounded. Hands on the wheel isn't relevant, since the problem wasn't one to be solved with steering.
Aside from timestamps (which don't seem particularly useful here), this seems to include just about everything interesting.
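To make the "discounted as an overhead sign" idea concrete, here's a minimal sketch of how a forward-radar filter might throw away a return like this. It's purely illustrative: the struct, thresholds, and classification rule are invented here, not Tesla's actual AEB logic.

```c
/* Hypothetical sketch of discounting a radar return as an overhead
 * structure. All names and thresholds are invented for illustration;
 * this is NOT Tesla's actual logic. Compile with: cc sketch.c -lm */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double range_m;        /* distance to the return, meters */
    double elevation_m;    /* estimated height above the road surface */
    double closing_speed;  /* m/s; ~own speed if the object is stationary */
} radar_return_t;

/* A stationary return sitting well above the road looks like a sign
 * gantry or bridge. A trailer spanning the road, with open space
 * beneath it, can produce a similar signature. */
static bool looks_like_overhead_structure(const radar_return_t *r,
                                          double own_speed_mps)
{
    bool stationary = fabs(r->closing_speed - own_speed_mps) < 0.5;
    bool high_up    = r->elevation_m > 1.2;   /* invented threshold */
    return stationary && high_up;
}

int main(void)
{
    double own_speed = 29.0;  /* m/s, roughly 65 mph */
    radar_return_t trailer = {
        .range_m = 80.0, .elevation_m = 1.4, .closing_speed = 29.0
    };

    if (looks_like_overhead_structure(&trailer, own_speed))
        puts("return discounted as overhead structure: AEB not triggered");
    else
        puts("return tracked as obstacle");
    return 0;
}
```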
> They said that the radar and camera detected the trailer but that it looked like an overhead sign and so was ignored.
Has Tesla or the NTSB actually said that? From ref. 1, published today, my emphasis:
> Tesla *is considering* whether the radar and camera input for the vehicle’s automatic emergency braking system failed to detect the truck trailer or the automatic braking system’s radar may have detected the trailer but discounted this input…
I don't think it matters much. The main thing is that the system didn't detect the trailer. Exactly why it failed to detect the trailer is interesting, but doesn't change anything about fault or responsibility. And if Tesla is still considering the possibility, then apparently it's hard enough to figure out that just releasing logs wouldn't help us.
I have trouble imagining the underlying reasons for this being particularly interesting. I guess if the lighting was so bad that even the driver couldn't see the trailer then that would excuse him from fault, but I don't see how it changes Tesla's role in it.
Short of an unholy ritual involving the supernatural, Tesla PR or Musk can only guess as to what the driver noticed in this case, and neither can accurately speak to what either the driver or the car did or did not do until the investigation is completed.
I think there's a lot to be concerned about wrt. AutoPilot and I plan on thoroughly reading the completed investigation reports.
The driver never braked. It's possible that he noticed the trailer and refrained from braking for some reason, but that would be really weird.
I don't see why they have to wait for an investigation to talk about what the car did, nor about what the driver did with the car. They have all that information. The purpose of the investigation is to contextualize that information and incorporate external information like the speed and position of the truck in the time leading up to the crash.
> The driver never braked. It's possible that he noticed the trailer and refrained from braking for some reason, but that would be really weird.
He could have had a medical complication. He could also have erred in thinking that the trailer would have cleared the path in time, or in thinking the car would have braked for him, or perhaps his reaction time was simply insufficient. I have no idea. None of the items on my brainstormed list strikes me as weird.
> I don't see why they have to wait for an investigation to talk about what the car did, nor about what the driver did with the car.
Tesla doesn't have to, but the skeptic in me observes that it is all too easy to spread misinformation (that would benefit them) through the technology press before the investigation is completed. I personally don't see an advantage for Tesla to speak at this stage since there are serious product liability issues in play, but I admittedly don't get Tesla in general.
But Autopilot users are supposed to be watching the road. Assuming this is the case, if the user had seen the vehicle he ought to have applied the brakes.
As a famous engineer said, "and if my grandmother had wheels, she'd be a wagon."
Something to consider is that a "good" risk analysis takes into account the foreseeable use and misuse of a product based on the information available. Given that it is no secret that AutoPilot users have used the system contrary to directions, it is reasonable to demand that the risk analysis be updated to account for this reality (assuming it did not account for this) as part of the continuous PLM process.
The risk analysis certainly needs to account for misuse, but that doesn't necessarily mean that the product needs to be designed to prevent it. If cars were designed that way, they'd all be governed to 85 MPH and refer the driver for mandatory training if they failed to signal for turns too frequently.
We could certainly ask. It may be that they are legally required to publish certain data but not certain other data. It might be the NTSB that determines this, or the privacy laws in the state where the crash took place. Tesla has been pretty transparent, or has appeared to be, for years now.
To posit a deliberate conspiracy to obscure unflattering facts seems unnecessary when the company and Musk himself have faced up to unflattering facts before — largely in order to publicly spin them in a positive light, admittedly, but the facts weren't suppressed.
If I were to log door opening events, I'd log what switch/sensor detected that event. Just seems like the natural thing to do to me, and not an indicator of anything particular.
Door ajar is a pretty common sensor. And sometimes I think this is wired as a single "a door is ajar" signal, so the ECM doesn't even really know which door is ajar. Door opened by inside vs. outside handle seemed to me like it was definitely taking it to the next level?
I don't think so. For one, doors can usually distinguish which handle was used, if only to decide whether or not to fire the alarm in case the door wasn't unlocked with the key.
From a code perspective, I know for a fact that it'd log which handle was used to open a door, just as reflex programming. The same way I'll log a user ID or any potentially useful value in scope for debugging.
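In that spirit, here's a hypothetical sketch of what such "reflex" logging might look like. Since each handle is its own switch, the firmware already knows which one fired, so recording it costs one extra field per event. All the names, enums, and the log format here are invented for illustration, not taken from any real vehicle firmware.

```c
/* Hypothetical door-event logging sketch; the enums and log format are
 * invented for illustration, not from any real vehicle firmware. */
#include <stdio.h>
#include <time.h>

typedef enum { DOOR_FRONT_LEFT, DOOR_FRONT_RIGHT,
               DOOR_REAR_LEFT,  DOOR_REAR_RIGHT } door_id_t;

typedef enum { TRIGGER_INSIDE_HANDLE,
               TRIGGER_OUTSIDE_HANDLE,
               TRIGGER_REMOTE_UNLATCH } door_trigger_t;

static const char *trigger_name(door_trigger_t t)
{
    switch (t) {
    case TRIGGER_INSIDE_HANDLE:  return "inside handle";
    case TRIGGER_OUTSIDE_HANDLE: return "outside handle";
    case TRIGGER_REMOTE_UNLATCH: return "remote unlatch";
    }
    return "unknown";
}

/* Each handle is a separate switch, so the event handler already knows
 * which one fired; logging it is just one extra field. */
static void log_door_open(door_id_t door, door_trigger_t trigger)
{
    printf("[%ld] door %d opened via %s\n",
           (long)time(NULL), (int)door, trigger_name(trigger));
}

int main(void)
{
    /* e.g. the event Tesla's letter describes: driver door, outside handle */
    log_door_open(DOOR_FRONT_LEFT, TRIGGER_OUTSIDE_HANDLE);
    return 0;
}
```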