A pickup truck being towed crooked and backwards. Both Waymo vehicles misread the situation in the same way.
Autonomous vehicles have various redundant systems built-in that can take priority and override false positives.
I was previously under the assumption that one of the really important reasons for Lidar is that it can get you closer to an absolute truth about whether something is a solid object, and where that hypothetically solid object is relative to the position of the vehicle, regardless of what the classifier thinks it is seeing.
So did the lidar fail to detect the solid object, or was it de-prioritized, or was it simply not available as a fallback?
Presumably Radar and proximity sensors were also involved. What were they doing?
This is a fascinating edge case, and I hope to hear about the real reason for the 2 incidents.
The article says it: “We determined that due to the persistent orientation mismatch of the towed pickup truck and tow truck combination, the Waymo AV incorrectly predicted the future motion of the towed vehicle.”
It was detected, but it predicted the truck would move in a way that it didn’t end up moving.
I don't find that acceptable in any way. No human driver is going to do that, and by that I mean no human driver is going to drive into something just because it moved in a way they didn't expect. They're going to slam on the brakes, and the only way the collision still happens is if their momentum is too high to stop in time.
I understand we have to have explanations or we can't fix them, but it's just as important to understand this should never have happened even WITH the described failure.
If I had to guess, there's code to avoid stopping at every little thing and that code took precedence (otherwise rides would not be enjoyable). And I get the competing interests here but there must be a comparison to humans when these incidents happen.
> no human driver is going to drive into something just because it moved in a way they didn't expect
I would actually put money on this being the cause of most crashes involving multiple moving cars. Hell, a friend of mine got into an accident two weeks ago where they t-boned somebody who turned onto a median when they didn't expect it.
> no human driver is going to drive into something just because it moved in a way they didn't expect.
This is literally the cause of almost every human accident.
Imagine you're driving. There's a car in front of you, also driving, at the same speed as you. Do you immediately slam on the brakes? No, because you EXPECT them to keep driving. That is how driving works.
If, suddenly, they do something unexpected - like slam on the brakes, that might cause an accident. Because ... they moved in an unexpected way.
I honestly can't even figure out what you meant to say.
If I have to choose between driving next to the nitwit texting or the software that might get tripped up in really unusual situations, I’m going with the software.
How do you drive into a solid wall if you have lidar, though? To say nothing of predictions, the object is where it is at that moment. You don't need to predict where it is now, because you already know where it is.
You can't drive if you only use the current "frame" of data as the basis for your decision. Imagine driving on the highway, a comfortable distance behind a lead vehicle.
The planning software would want to slam on the brakes without predicting that the blob of sensor data in front of you is going to continue moving forward at highway speeds. That motion prediction enables the planning software to know that the space in front of your vehicle will be unoccupied by the time you reach it.
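To make that concrete, here's a toy sketch of the idea (nothing to do with Waymo's actual planner; the names, margins, and the constant-velocity assumption are all made up for illustration): roll the tracked object forward and check whether the space ahead is predicted to stay clear, rather than reacting to a single frame.

```python
# Minimal illustration: predict a tracked object's future lane position under a
# constant-velocity assumption and check whether the gap ahead stays clear.
from dataclasses import dataclass

@dataclass
class Track:
    s: float   # position along the lane, metres
    v: float   # speed along the lane, m/s

def predict_position(track: Track, t: float) -> float:
    """Constant-velocity prediction of the object's lane position t seconds out."""
    return track.s + track.v * t

def space_ahead_clear(ego: Track, lead: Track, horizon_s: float = 4.0,
                      min_gap_m: float = 5.0, dt: float = 0.2) -> bool:
    """Check the predicted gap at each future timestep over the planning horizon."""
    steps = int(horizon_s / dt)
    for i in range(1, steps + 1):
        t = i * dt
        gap = predict_position(lead, t) - predict_position(ego, t)
        if gap < min_gap_m:
            return False
    return True

# Lead car cruising at our speed 40 m ahead: the predicted gap never shrinks, so no
# braking, even though the raw "frame" shows a big object directly in front of us.
print(space_ahead_clear(Track(s=0.0, v=30.0), Track(s=40.0, v=30.0)))  # True
# If that same object is predicted to be stationary, the space won't stay clear.
print(space_ahead_clear(Track(s=0.0, v=30.0), Track(s=40.0, v=0.0)))   # False
```

Without the prediction step, the first case looks exactly like the second one, which is why planning off the current frame alone doesn't work.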
A similar prediction error was the reason Cruise rear ended the bendy bus in SF a while back. It segmented the front and rear halves of the bus as two separate entities rather than a connected one, and mispredicted the motion of the rear half of the bus.
> That motion prediction enables the planning software to know that the space in front of your vehicle will be unoccupied by the time you reach it.
I think we're all on the same page about this part but what's confusing and hilarious is why would the correct answer ever be to drive into an unmoving object?
If they tried to avoid the truck and swerved and hit a different vehicle there would be no confusion here. But the self driving algorithm is effectively committing suicide (Kamikaze). That's novel.
My guess is that the self-driving car was not able to recognize the truck until it was very close, and the sudden appearance of the truck was interpreted by the algorithm as if the truck were moving very fast. The best answer in that case would be to let the truck pass (basically do what the Waymo did).
But that means the lidar information about the shape not moving is being deprioritized in favor of the recognized object being calculated to move fast, a situation which could only really occur if a speeding vehicle plowed through a stationary object.
Who said it was an unmoving object? Maybe I missed something in the story, but I got the sense that this happened in motion. The towed truck would have essentially been moving forward at an angle, hanging across the lane boundary.
The brakes don’t respond immediately - you need to be able to detect that a collision is imminent several seconds before it actually occurs.
This means you have to also successfully exclude all the scenarios where you are very close to another car, but a collision is not imminent because the car will be out of the way by the time you get there.
Yes, at some point before impact the Waymo probably figured out that it was about to collide. But not soon enough to do anything about it.
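Roughly what I mean, as a toy time-to-collision check (all thresholds and the brake-lag figure are made-up illustrative numbers, not anything from a real stack): "collision imminent" has to be decided from the predicted closing speed, not from proximity alone.

```python
# Illustrative only: flag an imminent collision early enough to cover actuation lag,
# while excluding close-but-not-closing situations like normal car following.
BRAKE_LAG_S = 0.5         # assumed time before the brakes actually bite
DECISION_HORIZON_S = 3.0  # how far ahead we must flag an imminent collision

def time_to_collision(gap_m: float, ego_speed_mps: float, other_speed_mps: float) -> float:
    """Seconds until the gap closes, assuming both keep their current speeds."""
    closing = ego_speed_mps - other_speed_mps
    if closing <= 0:
        return float("inf")   # gap is not shrinking: no collision predicted
    return gap_m / closing

def collision_imminent(gap_m: float, ego_speed_mps: float, other_speed_mps: float) -> bool:
    return time_to_collision(gap_m, ego_speed_mps, other_speed_mps) < DECISION_HORIZON_S + BRAKE_LAG_S

# Close behind a car moving at our speed: no predicted collision, no hard braking.
# This is exactly the case the predictor has to exclude.
print(collision_imminent(gap_m=8.0, ego_speed_mps=30.0, other_speed_mps=30.0))  # False
# The same gap against a stopped object leaves well under a second: far too late.
print(collision_imminent(gap_m=8.0, ego_speed_mps=30.0, other_speed_mps=0.0))   # True
```

And if the motion prediction feeding `other_speed_mps` is wrong, as it apparently was here, the imminent-collision flag fires too late to matter.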
My brakes respond immediately, for all intents and purposes, as a human with a reaction time. I'm at fault if the person in front of me stops and I don't have the stopping distance to avoid a collision.
I get that self-driving software is difficult. But there's no excuse for this type of accident.
That might be true for the simple case of following within a lane, although you only have to drive around to realize most drivers do not leave adequate following distance at all times to make this a pure physics problem. Nor does a good driver watch only the car in front; they also watch the brake lights of the cars ahead of it, to help anticipate the lead car's likely actions.
But take an even slightly more complex example: you're on a two lane roadway and the car in the other lane changes into your lane, leaving inadequate stopping distance for you. You brake as hard as you safely can (maybe you have a too-close follower, too), but still there will be a few seconds when you could not, in fact, avert a collision if for some reason the car in front braked.
I have no idea what the legal situation would be: is it their fault if the crash happens within 3 seconds, but yours if it happens after you've had time to re-establish your needed stopping distance and failed to do so?
Honestly even in the simple one lane case, I doubt you can slam your brakes on the interstate for no reason then expect to avoid any liability for the crash, blaming your follower for following too close.
Driving has a bunch of rules, then an awful lot of common sense and social interaction on top of them to make things actually work.
A car changing lanes does indeed remove stopping distance. But that's also something human drivers are naturally more capable of understanding than Waymo. It shouldn't have mattered where the vehicle was on the road: any human is able to predict whether a weirdly loaded vehicle making a turn has a chance of invading their lane and/or stopping distance. It's a complex problem for sure, but that also shows you need absolute proof that the software is able to generalise the problem, especially if you want self-driving cars to respect flow of traffic over stopping distance.
Even if your software is as good as it can be, I doubt you'll be able to get it to recognise how to resolve deadlocks, which would also mean severe hindrance to emergency vehicles.
I don't think anyone is excusing Waymo or saying that an accident is acceptable in this situation--it's just an interesting engineering problem to speculate about, and people are trying to figure out what caused it to fail.
The average speed of a commuting car is around 23 mph when you account for stops and red lights. 35 mph is only a hindrance if you commute by freeway every day, which many people don't.
People will take them if they're priced right and they can do things in the car without having to pay attention to the road.
You can’t drive more than a few MPH unless you’re reacting based on the expected future, rather than the current one.
It’s why it’s so difficult to do (actually) and the ability to do it well is just as much about the risk appetite of the one responsible as anything else - because knowing if a car is likely to pull out at the light into traffic, or how likely someone is to be hiding in a bush or not is really hard. But that is what humans deal with all the time while driving.
Because no one can actually know the future, and predicting the future is fundamentally risky. And knowing when to hold ‘em, and when to fold ‘em is really more of an AGI type thing.
In self-driving, you are making predictions about where the object is right now based on the synthesis of data from your sensors (and often filtering information from past estimates of the object position). These might be high-precision, high-accuracy predictions, but they're predictions nonetheless.
(It's been quite some years since I worked on vision-based self-driving, so my experience is non-zero but also quite dated.)
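As a rough illustration of what "filtering information from past estimates" looks like in practice, here's a textbook alpha-beta tracker, not anything from a production stack (real systems use Kalman-style filters fusing many sensors; the gains and measurements below are made up):

```python
# Toy alpha-beta tracker: fuse each new (noisy) position measurement with a
# prediction made from the previous estimate. Even "where is it right now"
# comes out of this predict-and-correct loop, i.e. it is itself a prediction.
class AlphaBetaTracker:
    def __init__(self, alpha: float = 0.5, beta: float = 0.1):
        self.alpha, self.beta = alpha, beta
        self.x = 0.0   # estimated position
        self.v = 0.0   # estimated velocity
        self.initialized = False

    def update(self, measured_x: float, dt: float):
        if not self.initialized:
            self.x, self.initialized = measured_x, True
            return self.x, self.v
        predicted_x = self.x + self.v * dt    # predict from the past estimate
        residual = measured_x - predicted_x   # how wrong that prediction was
        self.x = predicted_x + self.alpha * residual
        self.v = self.v + (self.beta / dt) * residual
        return self.x, self.v

trk = AlphaBetaTracker()
for z in [0.0, 3.1, 5.9, 9.2, 11.8]:   # noisy positions of a moving object, 0.1 s apart
    est_x, est_v = trk.update(z, dt=0.1)
print(round(est_x, 2), round(est_v, 2))
```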
One of the best things I've learnt recently is how to apply the zero blame, process improvement approach that (many) air safety regulators take to my own teams.
I'd sat through 'five whys' style postmortems before, but it was reading air safety investigation reports that finally got me to understand it and make it a useful part of how we get better at our jobs.
By comparison, the way we're investigating and responding to self-driving safety incidents still seems very primitive. Why is that?
One difference with this situation in terms of the public perception/discussion though is that, say in the 1960s, air safety wasn't very good compared to today, but still there was no question of eliminating air travel altogether due to safety issues. Today there is definitely an anti-self-driving contingent that would like to hype up every accident to get the self driving companies shut down entirely.
Another comparison with air safety: the disaster risk is taken seriously enough to ground vehicles with suspected faults or flaws.
In this case two self-driving cars crashed into another road vehicle because they failed to recognise (in time) which direction it was moving. Waymo should be commended for having voluntarily issued a software recall, but this problem is severe enough that the decision shouldn't really be up to Waymo's good judgement.
There is an explicit culture and mechanism of blamelessness around safety concerns and minor violations/deviations, which is incredibly helpful. Read about the ASRS* program (admin'd by NASA, with anonymity for non-intentional issues, prohibition on use of submissions for enforcement purposes, and explicit "get out punishment" card from the FAA): https://asrs.arc.nasa.gov/overview/immunity.html
I'd also read a bunch of aviation reports: https://www.ntsb.gov/Pages/monthly.aspx (more detailed reports are available approximately 2 years after the occurrence date and more details are available for fatal or air carrier occurrences, so if you don't care which ones to read, filter for those to start).
If you're more video oriented, watch @blancolirio, @NTSBgov, @AirSafetyInstitute, or @pilot-debrief. (I'd skip @ProbableCause-DanGryder.)
For a short summary, there is an intense focus on determining the facts (who, what, when, where, maybe some guesses as to why) and drawing conclusions about primary and contributing causes from there.
Sounds like they were relying solely on their neural network path prediction, which failed when the truck was dragged at an odd angle.
A simple lidar moving object segmentation, which doesn't even know what it's looking at but can always spit out reasonable path predictions, would probably have saved them.
I think Mobileye is doing something like this, but they release so little data, which is always full of marketing bullshit, that it is hard to know what exactly they are working on.
It's unlikely to be neural network based. This sounds like a model prediction failure. You take a mathematical model of car motion: the rear wheels generally don't steer, and the front steered wheels can cause the car to drive along an arc. If you want to predict the arc that will be driven, you take the initial heading of the vehicle and project forward in time based on your understanding of the vehicle's steering angle. For most "driving in lane at velocity" cases, you would generally assume the vehicle has very little steering angle input.
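A toy version of that kind of kinematic projection (a plain bicycle model with made-up parameters, just to illustrate the idea; nothing Waymo has published):

```python
# Kinematic bicycle model: given a heading and steering angle, project the
# vehicle's pose forward along an arc by simple forward integration.
import math

def predict_pose(x: float, y: float, heading: float, speed: float,
                 steer_angle: float, wheelbase: float, t: float,
                 dt: float = 0.1):
    """Integrate the bicycle model forward for t seconds; returns (x, y, heading)."""
    steps = int(t / dt)
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += (speed / wheelbase) * math.tan(steer_angle) * dt
    return x, y, heading

# "Driving in lane at velocity": near-zero steering input predicts a near-straight path.
print(predict_pose(0, 0, 0.0, speed=15.0, steer_angle=0.0, wheelbase=3.0, t=2.0))

# The failure mode here: a backwards-towed truck's measured heading (say 30 degrees
# off the lane) is decoupled from the direction it is actually being dragged, so
# this model predicts it veering across the road when it really tracks the tow truck.
print(predict_pose(0, 0, math.radians(30), speed=15.0, steer_angle=0.0, wheelbase=3.0, t=2.0))
```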
We're now getting to see where autonomy needs to develop "spider sense": the scene in front of me feels wrong because some element isn't following the expected behavior maybe in ways that can't really be rationalized about, so we'll become much more conservative/defensive when dealing with it.
I'm thinking it might make sense to have a sort of hierarchy of models. The stupidest model predicts that everything will be stationary. The second model predicts that everything will travel in a straight line. The third model tries to predict a circular arc based on some fusion of path history and observed steering input. The fourth model uses a notion of action to predict what's going to happen, like "the car is changing lanes". The fifth model uses body language and common sense to predict intention: "the pedestrian wants to cross".
Each model can potentially predict longer into the future but also has more complexity and things that can go wrong. So you keep track of how well each model is doing (on an object basis) and if one level is failing then you fall back on a stupider one. You might also want to increase caution if your models are not doing well (lower speed and increased safety distance).
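A hypothetical sketch of that fallback idea (the two predictors, the per-object error scores, and the tolerance are all invented for illustration):

```python
# Keep a per-object score of how well each predictor has been doing recently,
# and use the most capable predictor whose recent error is still acceptable.
from typing import Callable, List, Tuple

# A predictor maps (position, velocity, horizon_seconds) -> predicted position.
Predictor = Tuple[str, Callable[[float, float, float], float]]

PREDICTORS: List[Predictor] = [
    ("stationary", lambda s, v, t: s),                 # level 1: everything stays put
    ("constant-velocity", lambda s, v, t: s + v * t),  # level 2: straight-line motion
    # levels 3..5 (arc, manoeuvre, intent) would slot in here
]

def pick_predictor(recent_errors: dict, tolerance_m: float = 2.0) -> Predictor:
    """Start from the most capable model and fall back to stupider ones that still work."""
    for name, fn in reversed(PREDICTORS):
        if recent_errors.get(name, float("inf")) <= tolerance_m:
            return name, fn
    # Nothing is trustworthy: assume stationary and (elsewhere) raise caution / slow down.
    return PREDICTORS[0]

# For an oddly towed truck, the richer model's error blows up, so the planner
# drops down a level and should also increase its safety margins.
errors = {"stationary": 1.5, "constant-velocity": 6.0}
name, fn = pick_predictor(errors)
print(name, fn(10.0, 5.0, 2.0))   # "stationary 10.0"
```

The interesting part is the last bit of the comment above: persistent disagreement between levels is itself a signal to slow down and leave more room.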
These cars can and do slow down or even stop and wait for human assistance in response to unexpected situations. I'm actually quite surprised this didn't trigger here, although we don't really know much about the specifics of the situation.
I wish there was a picture of the strange towing configuration. I wonder if I would be confused as well, although my guess is that I'd read the situation correctly.
This is what people don't appreciate when quoting those statistics about how self-driving cars are safer than humans: when a human driver causes an accident, it was because that particular person did something wrong. When a self-driving car handles a situation wrongly that's a big issue, because all the self-driving cars run the same software.
On the other hand when a human driver causes an accident one driver learns a lesson (maybe). When a self driving car causes an accident all cars get to learn from it.
Humans don't have bugs. They may have disease or mental troubles, but we're pretty good at assessing those, thanks in part to their ability to communicate.
I'd argue that humans don't have bugs but their mental models of things do, mostly as a consequence of the fact that a model's complexity increases with its accuracy.
This is my biggest fear with self-driving cars: correlated failures. As a society we are extremely good at dealing with independent accidents. We can calculate very precisely how many people will die in traffic in a given year, and we can account for it, have insurance, and decide exactly how much we are willing to spend to save a life on the margin.
But if everything is fine, everything is fine, everything is fine, and then all hell breaks loose? We are not as good at dealing with that.
My fear is similar, but more along the lines of adversarial attacks as various weaknesses are exposed. Imagine people taking advantage of zero-day exploits that cause self driving cars to veer off the road, stop suddenly, collide, etc. It is really not that far fetched. This technology is very far away from maturity, IMO.
Your run-of-the-mill hacker, sure, but what about adversarial nations that are about to invade another nation and want to keep the Western world occupied? DC was brought to a crisis by the sniper; can you imagine if cars suddenly started plowing through school pickup lines or concert venues with no way for the occupant to stop it? Or even without an occupant?
As someone else hinted, there may be nothing to "fix", but rather this seems like a specific situation that it had just never encountered before. Adjusting the model to cause a safe response to that particular single rare situation (either manually or by accidental training) does not solve the apparent problem that the machine is not able to comprehend the world.
At least that is how I (as a non-expert) imagine these models work -- the model has an excellent chance of crashing at every new unique situation it encounters out of a nearly unlimited set of possible situations (which implies a high frequency of encountering new situations).
So in the future your self-driving car might be recalled at any random time because yet another corner case, out of an infinite supply, was found. If the car can't be driven by a human, all owners of these cars will be stuck wherever they were at that moment.
This will be interesting to watch. If I bought an autonomous car and the autonomous mode was disabled for a few days or even weeks while a bug was fixed, I could always fall back to driving it myself. If that's not possible (maybe because the car doesn't support it, or I don't have a licence, or human drivers have been banned), it's suddenly a whole different situation. Of course, owning a car might become less common in itself; if you're just hailing them like an Uber, you can always switch to a different company.
Yeah, there's nothing akin to a software update that would cause the entire fleet of human drivers to start driving badly or unexpectedly all at once.
We also know how to hold individuals accountable for independent accidents. We know we won't get justice when people inevitably start to get killed by standard corporate greed, incompetence, and enshittification.
> there's nothing akin to a software update that would cause the entire fleet of human drivers to start driving badly or unexpectedly all at once.
All, no. Enough to make a difference, easy.
Black Friday in February! In-Store offer only!
$99 Playstation 5 to first 100 customers at each Walmart location!
You can bet there will be a significant increase in people driving badly. edit: Make it Taylor Swift tickets, and you can increase the size of the frenzy.
Not even a cheap console or taytay tickets needed. I once had a lady with 5 kids in the car go to ram me when I went around her to get into the car park while she was waiting in line for the KFC drive through. Nuts lol
I imagine this can be remedied by slow-rolling non-critical updates so that the entire fleet doesn't get upended by Bobby Tables at once. You could trivially observe the daily change in accidents/collisions/whatever and adjust fire.
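Something like this, maybe (purely illustrative stages, rates, and thresholds, not any real deployment policy): expand the update to a larger slice of the fleet only while the observed incident rate stays close to the pre-update baseline, and roll back otherwise.

```python
# Toy staged-rollout gate keyed on an observed incident rate.
STAGES = [0.01, 0.05, 0.25, 1.00]   # fraction of the fleet running the new software

def next_stage(current: float, incidents_per_million_miles: float,
               baseline: float, tolerance: float = 1.1) -> float:
    """Advance to the next rollout stage, hold at full rollout, or roll back."""
    if incidents_per_million_miles > baseline * tolerance:
        return 0.0                                   # roll back the update
    later = [s for s in STAGES if s > current]
    return later[0] if later else current            # expand, or stay at 100%

print(next_stage(0.05, incidents_per_million_miles=0.9, baseline=1.0))  # 0.25: expand
print(next_stage(0.05, incidents_per_million_miles=1.5, baseline=1.0))  # 0.0: roll back
```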
Works great until the problem condition is not evenly distributed in place and time. Imagine that the release goes out in June but can’t handle icy roads; or the release goes out and can’t handle leap years; or it goes to cars in Iowa but has a problem interpreting ocean mist.
That remedy already applies to massive profitable services like Google and Facebook, yet they still have outages caused by sloppy configuration pushes.
I notice that when, e.g., it rains hard enough for long enough, a lot of people get annoyed and start driving badly. You get some kind of contagion where everybody gets annoyed and tired because all the other drivers are annoyed, tired assholes. Sometimes it even persists after the bad weather, especially if there was gridlock and/or it's the end of the workday. Humans do have a (lighter) version of collective bad driving.
That's one reason crash traffic persists for hours after the original cause of the slowdown is gone. I remember reading that one person braking too hard can have a knock-on effect, causing a traffic pattern that lasts long after the initial braking occurred.
Yeah, I wish this was emphasized more in driving school. I wonder how much traffic could be reduced just by making people conscious of their traffic-causing behaviors. (for example, using your brakes to slow down when there is nothing in front of you and others behind you)
Do we have a picture of the truck? I'm having difficulty imagining it given that surely the tow truck would want the towed vehicle in-line to make driving go smoothly?
The towed vehicle has its rear (driven) wheels up on a tow-hook type of tow truck, and it sounds like the steering wheel was locked while turned. This would lead to the angled tracking of the front wheels.
This would be common for a debt recovery, or when a city has impounded the vehicle: cases where it's taken without the cooperation of the owner.
[Recycled from an older submission] Well, I feel kinda vindicated by this news, after previously noting:
> People worry that ways and times [self-driving cars] are unsafe (separate from overall rates) will be unusual, less-predictable, or involve a novel risk-profile.
In this example, having a secretly cursed vehicle configuration is something that we don't normally think of as a risk-factor from human drivers.
_______
As an exaggerated thought experiment, imagine that autonomous driving achieves a miraculous reduction in overall accident/injury rate down to just 10% of when humans were in charge... However of the accidents that still happen, half are spooky events where every car on the road targets the same victim for no discernible reason.
From the perspective of short-term utilitarianism, an unqualified success, but it's easy to see why it would be a cause of concern that could block adoption.