Hmmm, I don’t believe so? Zen is at least portrayed in its conclusion as a true, autobiographical story (obviously with dollops of philosophical musing).
Yes. Is this a failed left turn where the Waymo became trapped in the intersection? Did something keep the Waymo from exiting via a left turn?
There's a brief glimpse of a large pickup truck stopped in the intersection.
A few more seconds of video before or after would help a lot. Maybe Waymo will post video from their side.
Yes, there could be actual good reasons for this. Maybe a car illegally turned left in front of the Waymo and the Waymo made a sharp turn to avoid a collision.
Rust has a mechanism for enforcing memory safety. That is great for deployed applications, but it can be annoying when you’re still exploring / mutating your code to figure out the right shape of things.
The article says it: “We determined that due to the persistent orientation mismatch of the towed pickup truck and tow truck combination, the Waymo AV incorrectly predicted the future motion of the towed vehicle.”
It was detected, but it predicted the truck would move in a way that it didn’t end up moving.
I don't find that acceptable in any way. No human driver is going to do that, and by that I mean no human driver is going to drive into something just because it moved in a way they didn't expect. They're going to slam on the brakes, and the only way the collision still happens is if their momentum is too high.
I understand we have to have explanations or we can't fix these systems, but it's just as important to understand that this should never have happened even WITH the described failure.
If I had to guess, there's code to avoid stopping at every little thing and that code took precedence (otherwise rides would not be enjoyable). I get the competing interests here, but when these incidents happen there has to be a comparison to how a human would have handled them.
> no human driver is going to drive into something just because it moved in a way they didn't expect
I would actually put money on this being the cause of most crashes involving multiple moving cars. Hell, a friend of mine got into an accident two weeks ago where they T-boned somebody who turned onto a median when they didn't expect it.
> no human driver is going to drive into something just because it moved in a way they didn't expect.
This is literally the cause of almost every human accident.
Imagine you're driving. There's a car in front of you, also driving, at the same speed as you. Do you immediately slam on the brakes? No, because you EXPECT them to keep driving. That is how driving works.
If, suddenly, they do something unexpected, like slamming on the brakes, that might cause an accident. Because... they moved in an unexpected way.
I honestly can't even figure out what you meant to say.
If I have to choose between driving next to the nitwit texting or the software that might get tripped up in really unusual situations, I’m going with the software.
How do you drive into a solid wall if you have lidar, though? To say nothing of predictions, the object is where it is... where it is at that moment. You don't need to predict where it is now, because you already know where it is.
You can't drive if you only use the current "frame" of data as the basis for your decision. Imagine driving on the highway, a comfortable distance behind a lead vehicle.
The planning software would want to slam on the brakes without predicting that the blob of sensor data in front of you is going to continue moving forward at highway speeds. That motion prediction enables the planning software to know that the space in front of your vehicle will be unoccupied by the time you reach it.
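To make that concrete, here is a minimal sketch in Python. It has nothing to do with Waymo's actual planner; the function names, speeds, and thresholds are all assumptions for illustration. It just shows why a collision check has to run against predicted positions rather than the current frame alone:

```python
# Minimal sketch: a 1-D "should I brake?" check that uses constant-velocity
# predictions over a short horizon. All names and numbers are illustrative.

def predict_position(pos_m: float, speed_mps: float, t_s: float) -> float:
    """Constant-velocity prediction of a position t_s seconds in the future."""
    return pos_m + speed_mps * t_s

def should_brake(ego_pos, ego_speed, lead_pos, lead_speed,
                 horizon_s=3.0, min_gap_m=10.0, dt_s=0.1) -> bool:
    """Brake only if the predicted gap to the lead vehicle drops below min_gap_m."""
    t = 0.0
    while t <= horizon_s:
        gap = predict_position(lead_pos, lead_speed, t) - predict_position(ego_pos, ego_speed, t)
        if gap < min_gap_m:
            return True
        t += dt_s
    return False

# Following 30 m behind a lead car that matches our highway speed: no braking,
# because the predicted gap never shrinks even though the car is "close" right now.
print(should_brake(ego_pos=0.0, ego_speed=30.0, lead_pos=30.0, lead_speed=30.0))  # False
# Same 30 m gap, but the lead car is stopped: the predicted gap collapses, so brake.
print(should_brake(ego_pos=0.0, ego_speed=30.0, lead_pos=30.0, lead_speed=0.0))   # True
```

The failure described in the article is presumably the second case mispredicted as the first: the towed pickup was expected to move out of the way, so the check never fired in time.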
A similar prediction error was the reason Cruise rear-ended the articulated ("bendy") bus in SF a while back. Its software segmented the front and rear halves of the bus as two separate entities rather than one connected vehicle, and mispredicted the motion of the rear half.
> That motion prediction enables the planning software to know that the space in front of your vehicle will be unoccupied by the time you reach it.
I think we're all on the same page about this part but what's confusing and hilarious is why would the correct answer ever be to drive into an unmoving object?
If they had tried to avoid the truck, swerved, and hit a different vehicle, there would be no confusion here. But the self-driving algorithm is effectively committing suicide (kamikaze). That's novel.
My guess is that the self-driving car was not able to recognize the truck until it was very close, and the sudden appearance of the truck was interpreted by the algorithm as the truck moving very fast. The best answer in that case would be to let the truck pass (basically do what the Waymo did).
But that means the lidar information about the shape not moving was deprioritized in favor of the recognized object being calculated to move fast, a situation which could only really occur if a speeding vehicle had plowed through a stationary object.
Who said it was an unmoving object? Maybe I missed something in the story, but I got the sense that this happened in motion. The towed truck would have essentially been moving forward at an angle, hanging across the lane boundary.
The brakes don’t respond immediately - you need to be able to detect that a collision is imminent several seconds before it actually occurs.
This means you have to also successfully exclude all the scenarios where you are very close to another car, but a collision is not imminent because the car will be out of the way by the time you get there.
Yes, at some point before impact the Waymo probably figured out that it was about to collide. But not soon enough to do anything about it.
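For a rough sense of the time scales involved (the numbers below are generic assumptions, not figures from the incident), here is a back-of-the-envelope stopping-distance calculation:

```python
# Back-of-the-envelope stopping distance: distance covered during the
# detection/actuation latency plus the distance needed to brake to a stop.
# All numbers are illustrative assumptions, not data from the incident.

def stopping_distance_m(speed_mps: float, latency_s: float, decel_mps2: float) -> float:
    return speed_mps * latency_s + speed_mps ** 2 / (2 * decel_mps2)

# ~45 mph (20 m/s), 0.5 s of combined detection and brake-actuation latency,
# hard braking at 8 m/s^2:
print(round(stopping_distance_m(20.0, 0.5, 8.0), 1))  # 35.0 m, i.e. ~1.75 s of travel
```

That is why detecting the collision a few hundred milliseconds before impact isn't enough; the braking decision has to be made a couple of seconds out, which is exactly the window where the motion prediction matters.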
My brakes respond immediately, for all intents and purposes, given that I'm a human with a reaction time. I'm at fault if the person in front of me stops and I don't have the stopping distance to avoid a collision.
I get that self-driving software is difficult. But there's no excuse for this type of accident.
That might be true for the simple case of following within a lane, although you only have to drive around to realize most drivers do not leave adequate following distance at all times to make this a pure physics problem. And neither is a good driver watching only the car in front, but also the brake lights of the cars in front of that, to help anticipate the car in front's likely actions.
But take an even slightly more complex example: you're on a two lane roadway and the car in the other lane changes into your lane, leaving inadequate stopping distance for you. You brake as hard as you safely can (maybe you have a too-close follower, too), but still there will be a few seconds when you could not, in fact, avert a collision if for some reason the car in front braked.
I have no idea what the legal situation would be: is it their fault if the crash happens within 3 seconds, but yours if it happens after you've had time but failed to re-establish your needed stopping distance?
Honestly even in the simple one lane case, I doubt you can slam your brakes on the interstate for no reason then expect to avoid any liability for the crash, blaming your follower for following too close.
Driving has a bunch of rules, then an awful lot of common sense and social interaction on top of them to make things actually work.
A car changing lanes does indeed remove stopping distance, but that's also something human drivers are naturally more capable of understanding than the Waymo. It shouldn't have mattered where the vehicle was on the road: any human is able to predict whether a weirdly loaded vehicle making a turn has a chance of invading their lane and/or stopping distance. It's a complex problem for sure, but that also shows you need absolute proof that the software is able to generalise the problem, especially if you want self-driving cars to respect flow of traffic over stopping distance.
Even if your software is as good as it can be, I doubt you'll be able to get these cars to recognise how to resolve deadlocks, which would also mean severe hindrance to emergency vehicles.
I don't think anyone is excusing Waymo or saying that an accident is acceptable in this situation--it's just an interesting engineering problem to speculate about, and people are trying to figure out what caused it to fail.
The average speed of a commuting car is around 23 mph once you account for stops and red lights. 35 mph is only a hindrance if you commute by freeway every day, which many people don't.
People will take them if they're priced right and they can do things in the car without having to pay attention to the road.
You can’t drive more than a few mph unless you’re reacting based on the expected future state of the world, rather than the current one.
It’s why it’s so difficult to do (actually do), and why the ability to do it well is just as much about the risk appetite of whoever is responsible as anything else: knowing whether a car is likely to pull out into traffic at the light, or how likely someone is to be hiding behind a bush, is really hard. But that is what humans deal with all the time while driving.
Because no one can actually know the future, predicting it is fundamentally risky. And knowing when to hold ’em and when to fold ’em is really more of an AGI-type thing.
In self-driving, you are making predictions about where the object is right now based on the synthesis of data from your sensors (and often filtering information from past estimates of the object position). These might be high-precision, high-accuracy predictions, but they're predictions nonetheless.
(It's been quite some years since I worked on vision-based self-driving, so my experience is non-zero but also quite dated.)
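A toy illustration of that point (my own sketch, not any production tracker): even the "current" position of an object comes out of a filter that blends a motion prediction with the latest noisy measurement.

```python
# Toy alpha-beta tracker: each update predicts where the object should be now,
# then corrects that prediction with the latest noisy measurement. The gains
# and the measurement values are made up for illustration.

def alpha_beta_step(pos_est, vel_est, measurement, dt=0.1, alpha=0.5, beta=0.1):
    pred_pos = pos_est + vel_est * dt           # predict the position one step ahead
    residual = measurement - pred_pos           # how far the measurement is from the prediction
    pos_est = pred_pos + alpha * residual       # blend prediction and measurement
    vel_est = vel_est + (beta / dt) * residual  # nudge the velocity estimate too
    return pos_est, vel_est

pos, vel = 0.0, 30.0  # initial guess: object at 0 m, moving at 30 m/s
for z in [3.1, 5.9, 9.2, 11.8, 15.1]:  # noisy range measurements, one per 0.1 s
    pos, vel = alpha_beta_step(pos, vel, z)
    print(f"pos ~ {pos:5.2f} m, vel ~ {vel:5.2f} m/s")
```

So "where the object is" is itself a prediction, just over a very short horizon.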
My mom was hit by a driver when she was biking and fell with her arm in front of the wheel. The driver then decided to pull forward and drove over her arm. So humans don’t really solve that problem.
I highly recommend the course “Computer, Enhance!” by Casey Muratori on Substack for those interested in how CPUs / assembly work. You get to decode machine code, simulate instructions, learn how the stack works, etc. It is really well-paced, gives you plenty of space to figure things out your own way (with reference material if you need it), and helped me get several “aha!” moments that solidified how things work.
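For a small taste of what the decoding exercises feel like (this is my own sketch, not material from the course), here is the register-to-register form of the 8086 MOV instruction being decoded from its two instruction bytes:

```python
# Decode the 16-bit register-to-register form of the 8086 MOV instruction
# (opcode bits 100010dw, followed by a mod/reg/rm byte). This sketch assumes
# mod == 11 (register mode) and w == 1 (16-bit registers).

REGS_16 = ["ax", "cx", "dx", "bx", "sp", "bp", "si", "di"]

def decode_mov_reg_reg(b1: int, b2: int) -> str:
    assert b1 >> 2 == 0b100010, "not a MOV reg/mem <-> reg opcode"
    d = (b1 >> 1) & 1                  # direction bit: 1 means the REG field is the destination
    reg = REGS_16[(b2 >> 3) & 0b111]   # REG field
    rm = REGS_16[b2 & 0b111]           # R/M field (also a register, since mod == 11)
    dst, src = (reg, rm) if d else (rm, reg)
    return f"mov {dst}, {src}"

print(decode_mov_reg_reg(0x89, 0xD9))  # mov cx, bx
```

The real exercises go much further than this single case, but it gives a flavor of the bit-twiddling involved.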