I have trouble seeing why the underlying reasons for this are particularly interesting. I guess if the lighting was so bad that even the driver couldn't see the trailer then that would excuse him from fault, but I don't see how it changes Tesla's role in it.
Short of an unholy ritual involving the supernatural, Tesla PR or Musk can only guess at what the driver noticed in this case, and neither can accurately speak to what either the driver or the car did or did not do until the investigation is completed.
I think there's a lot to be concerned about wrt. AutoPilot and I plan on thoroughly reading the completed investigation reports.
The driver never braked. It's possible that he noticed the trailer and refrained from braking for some reason, but that would be really weird.
I don't see why they have to wait for an investigation to talk about what the car did, nor about what the driver did with the car. They have all that information. The purpose of the investigation is to contextualize that information and incorporate external information like the speed and position of the truck in the time leading up to the crash.
> The driver never braked. It's possible that he noticed the trailer and refrained from braking for some reason, but that would be really weird.
He could have had a medical complication. He could also have erred in thinking that the trailer would clear the path in time, or that the car would brake for him, or perhaps his reaction time was simply insufficient. I have no idea. None of the items on my brainstormed list strikes me as weird.
> I don't see why they have to wait for an investigation to talk about what the car did, nor about what the driver did with the car.
Tesla doesn't have to, but the skeptic in me observes that it is all too easy to spread misinformation (that would benefit them) through the technology press before the investigation is completed. I personally don't see an advantage for Tesla to speak at this stage since there are serious product liability issues in play, but I admittedly don't get Tesla in general.
But Autopilot users are supposed to be watching the road. Assuming this is the case, if the user had seen the vehicle he ought to have applied the brakes.
As a famous engineer said, "and if my grandmother had wheels, she'd be a wagon."
Something to consider is that a "good" risk analysis takes into account the foreseeable use and misuse of a product based on the information available. Given that it is no secret that AutoPilot users have used the system contrary to directions, it is reasonable to demand that the risk analysis be updated to account for this reality (assuming it did not account for this) as part of the continuous PLM process.
The risk analysis certainly needs to account for misuse, but that doesn't necessarily mean that the product needs to be designed to prevent it. If cars were designed that way, they'd all be governed to 85 MPH and would refer the driver for mandatory training if they failed to signal for turns too frequently.
"Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky" - https://www.tesla.com/blog/tragic-loss