> Freeways might appear "easy" on the surface, but there are all sorts of long tail edge-cases that make them insanely tricky to do confidently without a driver
Maybe my memory is failing me, but I seem to remember people saying the exact opposite here on HN when Tesla first announced/showed off their "self-driving but not really self-driving" features, saying it'll be very easy to get working on the highways, but then everything else is the tricky stuff.
Highways are on average a much more structured and consistent environment, but every single weird thing (pedestrians, animals, debris, flooding) that occurs on streets also happens on highways. When you're doing as many trips and miles as Waymo, once-in-a-lifetime exceptions happen every day.
On highways the kinetic energy is much greater (Waymo's reaction time is superhuman, but the car can't brake any harder), and there isn't the option to fail safe (stop in place) like there is on normal roads.
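A rough back-of-envelope illustration of that point, assuming a constant ~0.8 g of braking (my number, not Waymo's): braking distance grows with the square of speed, so the same hazard leaves far less margin at highway speed.

```python
# Illustration only: braking distance scales with v^2, so highway speeds
# leave much less margin. Assumes a constant 0.8 g of deceleration, roughly
# the limit for a passenger car on dry pavement.
G = 9.81               # m/s^2
DECEL = 0.8 * G        # assumed peak braking deceleration

def braking_distance_m(speed_mph: float) -> float:
    """Distance to stop from speed_mph under constant DECEL (ignores reaction time)."""
    v = speed_mph * 0.44704    # mph -> m/s
    return v * v / (2 * DECEL)

for mph in (35, 65):
    print(f"{mph} mph: ~{braking_distance_m(mph):.0f} m to stop")
# 35 mph: ~16 m, 65 mph: ~54 m -- less than 2x the speed, more than 3x the distance
```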
I don't have any specific knowledge about Waymo's stack, but I can confidently say Waymo's reaction time is likely poorer than an attentive human's. By the time sensor data makes it through the perception stack, the prediction/planning stack, and back out to the controls stack, you're likely looking at >500ms. Waymos have the advantage of consistency, though (they never text and drive).
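To show where a number like that could come from, here's a purely made-up latency budget; none of these stage figures are published by Waymo. The point is just that end-to-end latency is the sum of several pipeline stages, not a single sensor-to-brake hop.

```python
# Hypothetical latency budget -- every number here is a guess for
# illustration, not anything Waymo has published.
stage_latency_ms = {
    "sensor capture + ingest": 50,
    "perception":             150,
    "prediction + planning":  200,
    "controls + actuation":   100,
}
print(f"end-to-end: {sum(stage_latency_ms.values())} ms")  # 500 ms under these guesses
```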
> but I can confidently say [...] you're likely looking at >500ms
That sounds outrageous if true. It's very strange to acknowledge you don't actually have any specific knowledge about this before making a grand claim, and not just to make it "confidently" but to explicitly label it as such.
They've been publishing some material around latency (https://waymo.com/search?q=latency), but I'm not finding any concrete numbers. Still, I'd be very surprised if it was higher than the reaction time of a human, which seems to typically be around 400-600ms.
Human reaction time is very difficult to average meaningfully. It ranges anywhere from a few hundred milliseconds on the low end to multiple seconds. The low end of that range consists of snap reactions by alert drivers, and the high end is common with distracted driving.
400-500ms is a fairly normal baseline for AV systems in my experience.
> MIT researchers have found an answer in a new study that shows humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road — with younger drivers detecting hazards nearly twice as fast as older drivers.
But it'll be highly variable, not just between individuals but also with state of mind, attentiveness, and a whole lot of other things.
Even if we assume this to be true, Waymos have the advantage of more sensors and fewer blind spots.
Unlike humans, they can also sense what's behind the car and other spots not directly visible to a human.
They can also measure distance very precisely thanks to lidars (and perhaps radars too?).
A human reacts to the red lights when a car ahead brakes; without that cue, it takes much longer for stereo vision alone to register that the car ahead is getting closer.
And I'm pretty sure that when the car detects an obstacle approaching fast at close range, or a car ahead stopping suddenly, or a deer jumping out, or whatever, it brakes directly without needing neural network processing; those are probably low-level failsafes that are very fast to compute, and definitely faster than a human could react.
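I don't know what Waymo actually runs, but the kind of low-level failsafe being speculated about here is often sketched as a simple time-to-collision check that can trigger hard braking without waiting on the full planning stack. Something like:

```python
# Generic AEB-style heuristic, NOT a description of Waymo's architecture:
# trigger hard braking when estimated time-to-collision drops below a threshold.

def should_emergency_brake(range_m: float,
                           closing_speed_mps: float,
                           ttc_threshold_s: float = 1.5) -> bool:
    """Return True if the estimated time-to-collision is below the threshold."""
    if closing_speed_mps <= 0:        # not closing on the obstacle
        return False
    ttc = range_m / closing_speed_mps
    return ttc < ttc_threshold_s

# Example: obstacle 25 m ahead, closing at 20 m/s (~45 mph) -> TTC = 1.25 s
print(should_emergency_brake(25.0, 20.0))  # True
```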
Waymo "sees" further - including behind cars - and has persistent 360-degree awareness, wheres humans have to settle for time-division of the fovea and are limited to line-of-sight from driver's seat. Humans only have an advantage if the event is visible from the cabin, and they were already looking at it (i.e. it's in front of them) for every other scenario, Waymo has better perception + reaction times. "They just came out of nowhere" happens less for Waymo vehicles with their current sensor suite.
Beyond the questions about human braking, this seems worse than the dedicated AEB systems many vehicles are using now. Do they really use the full stack for this case instead of a faster collision avoidance path? I remember some of their people talking about concurrency back in the DARPA Grand Challenge days and it seems like this would be a high priority for anyone working on a system like this.
It's actually a really interesting topic to think about. Depending on the situation, there might be some indecision in a human driver that slows the process down. Whereas the Waymo probably has a decisive answer to whatever problem is facing it.
I don't really know the answers for sure here, but there's probably a gray area where humans struggle more than the Waymo.
Humans can provide a simple, pre-planned reaction to an expected event (e.g. "click when the reaction test shows a signal") within typically 250-300ms, but 500ms from vision to physically executed action for an unexpected event seems pretty optimistic for a human driver.
It's easier to get from zero to something that works on divided highways, since there's only lanes, other vehicles, and a few signs to care about. No cross traffic, cyclists, pedestrians, parked cars, etc.
One thing that's hard with highways is that vehicles move faster: in a tenth of a second at 65 mph, a car has moved about 9.5 feet. So if, say, a big rock fell off a truck onto the highway, to detect it early and proactively brake or change lanes to avoid it, it would need to be detected at quite a long distance, which demands a lot from the sensors (e.g. how many pixels/LIDAR returns do you get at, say, 300+ feet on an object smaller than a car, and how many do you need to classify it as an obstruction?).
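Putting some made-up but plausible numbers on that (my assumptions, not Waymo's specs):

```python
import math

# How far the car travels per tenth of a second at 65 mph, and roughly how
# many LIDAR returns a small obstacle yields at long range. The 1 ft rock,
# 300 ft range, and 0.1 deg horizontal resolution are all assumptions.
speed_ftps = 65 * 1.46667                          # mph -> ft/s
print(f"distance per 0.1 s at 65 mph: {speed_ftps * 0.1:.1f} ft")   # ~9.5 ft

angle_deg = math.degrees(math.atan2(1.0, 300.0))   # angular width of a 1 ft rock at 300 ft
print(f"angular width: {angle_deg:.2f} deg -> ~{angle_deg / 0.1:.0f} returns per scan line")
```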
But those also happen quite infrequently, so a vehicle that doesn't handle road debris (or deer or rare obstructions) can work with supervision and appear to work autonomously, but one that's fully autonomous can't skip those scenarios.
The difficult part of highways is the interchanges, not the straight shots between interchanges. And IIRC, Tesla didn't do interchanges at the time people were criticizing them for only doing the easiest part of self-driving.
Everybody you replied to made a completely different hypothesis, but the head of Waymo mentioned why they waited on highways: on regular roads, if the computer fails to maneuver, you have an extremely simple, generally safe temporary solution: you just stop the car. Stopping the car is almost always acceptable on regular roads. It's not an acceptable solution to undefined problems on the highway. This matters because in a Tesla there's still a requirement for a driver to be there to handle worst-case scenarios, but in a Waymo that's not true.
I think the key is, it's easy to get "self-driving" where the car will hand off to the driver working on highways. "Follow the lines, go forward, don't get hit". But having it DRIVERLESS is a different beast, and the failure states are very different than those in surface street driving.
"If you had asked me in 2018, when I first started working in the AV industry, I would’ve bet that driverless trucks would be the first vehicle type to achieve a million-mile driverless deployment. Aurora even pivoted their entire company to trucking in 2020, believing it to be easier than city driving.
...
Stopping in lane becomes much more dangerous with the possibility of a rear-end collision at high speed. All stopping should be planned well in advance, ideally exiting at the next ramp, or at least driving to the closest shoulder with enough room to park.
This greatly increases the scope of edge cases that need to be handled autonomously and at freeway speeds.
...
The features that make freeways simpler — controlled access, no intersections, one-way traffic — also make ‘interesting’ events more rare. This is a double-edged sword. While the simpler environment reduces the number of software features to be developed, it also increases the iteration time and cost.
During development, ‘interesting’ events are needed to train data-hungry ML models. For validation, each new software version to be qualified for driverless operation needs to encounter a minimum number of ‘interesting’ events before comparisons to a human safety level can have statistical significance. Overall, iteration becomes more expensive when it takes more vehicle-hours to collect each event.”
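To make that iteration-cost point concrete, with rates I'm inventing purely for illustration: if 'interesting' events are, say, 10x rarer per mile on freeways than on city streets, then every validation pass needs roughly 10x the vehicle-miles.

```python
# Invented rates, illustration only: rarer events -> more miles per iteration.
events_needed = 100                                             # assumed validation minimum
miles_per_event = {"city streets": 1_000, "freeway": 10_000}    # made-up rates

for env, mpe in miles_per_event.items():
    print(f"{env}: ~{events_needed * mpe:,} miles to collect {events_needed} events")
```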