Disagree. If you screen your test drivers, you no longer have adequate testing. Your test drivers need to be representative (as much as possible) of the general population.
No, Waymo's test drivers aren't supposed to be representative of future customers, they're supposed to babysit the cars until drivers are no longer needed at all. That's why their Firefly prototype didn't even have a steering wheel.
The placement of the gas pedal does seem strange in retrospect if we think about people falling asleep in the car. I'd also be worried about people pitching forward in their sleep and banging their head against the steering wheel and activating something they shouldn't.
On 26 Sep. 1982, in 'Knight of the Phoenix', KITT automatically takes over when Michael falls asleep.
Google should have a camera pointing at the driver and use machine learning to detect someone who is asleep or medically impaired, and automatically take over if the car starts to leave its lane.
Perhaps the driver could configure where they want the car to go in such a case (if asleep: continue to the final destination; if medical issue: drive to the nearest hospital and call "Parent" or "Spouse" on the cell phone and explain the situation; if a smart watch can detect alcohol in your sweat, making unconsciousness likely due to intoxication: drive to a trusted friend and call ahead, etc.).
At the very least, if the person appears to be asleep it could ask "Are you sure you want to return to manual control?". It should be able to tell the difference between someone sleeping and someone alert wanting control back.
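To make that concrete, here's a rough sketch of the kind of policy table I'm imagining. Everything in it (the state names, the actions, the resolve_manual_override function) is made up for illustration; it's not a claim about how Waymo's software actually works:

    # Hypothetical sketch only; all names are invented for illustration.
    from enum import Enum, auto

    class DriverState(Enum):
        ALERT = auto()
        ASLEEP = auto()
        MEDICAL_EMERGENCY = auto()
        LIKELY_INTOXICATED = auto()

    class FallbackAction(Enum):
        HAND_OVER_CONTROL = auto()        # driver is alert and asked for manual mode
        CONFIRM_BEFORE_HANDOVER = auto()  # "Are you sure you want manual control?"
        CONTINUE_TO_DESTINATION = auto()
        DRIVE_TO_HOSPITAL_AND_CALL = auto()
        DRIVE_TO_TRUSTED_CONTACT = auto()

    # Owner-configurable preferences, as suggested above.
    DEFAULT_POLICY = {
        DriverState.ASLEEP: FallbackAction.CONTINUE_TO_DESTINATION,
        DriverState.MEDICAL_EMERGENCY: FallbackAction.DRIVE_TO_HOSPITAL_AND_CALL,
        DriverState.LIKELY_INTOXICATED: FallbackAction.DRIVE_TO_TRUSTED_CONTACT,
    }

    def resolve_manual_override(driver_state, policy=DEFAULT_POLICY):
        """Decide what to do when a manual-control input (e.g. the gas pedal) is detected."""
        if driver_state == DriverState.ALERT:
            return FallbackAction.HAND_OVER_CONTROL
        if driver_state == DriverState.ASLEEP:
            # Don't hand over silently; confirm first, then fall back to the configured action.
            return FallbackAction.CONFIRM_BEFORE_HANDOVER
        return policy.get(driver_state, FallbackAction.CONTINUE_TO_DESTINATION)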
It's been some decades since fiction inspired us, but we're getting close!
Maybe later, but there are an awful lot of fuzzy heroic heuristics in there. For now, I'd say the only safe thing to do when the driver falls asleep is come to a safe stop.
Under the circumstances, I'd accept just slowly rolling to a stop with the hazard lights on. People behind the car will hate it, but probably no one will die.
Sadly, I can still imagine someone crashing into a static car on the highway. Normal-speed continuation, mid-lane stopping, and attempting to navigate to the side of the road all seem to have some pretty heavy risks. I do wonder if a low-speed continuation would be in any way suitable.
A "safe stop on the high way" means the side of the road in normal everyday context. That would be appropriate here, no? Highway patrol would check out the car pretty quickly.
The basic idea is that the switch can only be maintained in a non-braking state by an alert, not-incapacitated person. If you press too hard, it brakes; if you don't press hard enough, it brakes. You have to apply just enough force, and that's something only an alert person can do.
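A minimal sketch of that force-window logic, assuming a pressure sensor that reports force in newtons (the thresholds are invented, not from any real system):

    # Illustrative only: a force-window dead man's switch.
    # Brake unless the applied force stays inside a narrow band that an
    # alert person can hold but a limp or slumped hand cannot.
    MIN_FORCE_N = 10.0   # hypothetical lower bound
    MAX_FORCE_N = 25.0   # hypothetical upper bound

    def should_brake(applied_force_newtons: float) -> bool:
        """Return True if the car should begin braking to a stop."""
        too_light = applied_force_newtons < MIN_FORCE_N   # hand slipped off / asleep
        too_hard = applied_force_newtons > MAX_FORCE_N    # slumped onto the switch
        return too_light or too_hard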
Clearly, using the gas pedal to turn off auto-pilot is not good enough.
By the way, we now have a pilot program in Hong Kong where cameras are installed in buses and use computer vision to detect if a bus driver is dozing off. The system then automatically makes the driver's seat vibrate and sound an alarm to wake the driver up.
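I don't know what that particular system uses internally, but a common way to build something like it is the eye-aspect-ratio trick on facial landmarks: if the eyes stay closed past a time threshold, fire the alarm. A rough, self-contained sketch (the landmark detection itself is omitted, and the threshold values are assumptions):

    import time

    EAR_CLOSED_THRESHOLD = 0.2   # assumed: below this the eye is treated as closed
    MAX_CLOSED_SECONDS = 2.0     # assumed: eyes closed this long => alarm

    def eye_aspect_ratio(eye):
        """eye: six (x, y) landmark points around one eye, using the common
        EAR formulation: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
        def dist(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        p1, p2, p3, p4, p5, p6 = eye
        return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

    class DrowsinessMonitor:
        def __init__(self):
            self.closed_since = None

        def update(self, left_eye, right_eye, now=None):
            """Call once per video frame with the two sets of eye landmarks.
            Returns True when the alarm/seat vibration should fire."""
            now = time.monotonic() if now is None else now
            ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
            if ear < EAR_CLOSED_THRESHOLD:
                if self.closed_since is None:
                    self.closed_since = now
                return (now - self.closed_since) >= MAX_CLOSED_SECONDS
            self.closed_since = None
            return False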
How do you expect a driver to maintain a dead man's switch the entire time they are driving in manual mode?
Or conversely, suppose the switch must be maintained while the car is in autonomous mode: if the driver lets go, should the car pull over and stop on the side of the freeway? In terms of user experience I would expect the car to continue to the destination. The commenter discussing the use of sensors to detect the state of the driver and act accordingly makes more sense.
> In terms of user experience I would expect the car to continue to the destination.
I ultimately hope this is a question answered by legal standards rather than by "user experience." If we're requiring drivers to be present even in autonomous cars, then they need to pull over and possibly take other alerting actions.
> The commenter discussing the use of sensors to detect the state of the driver and act accordingly makes more sense.
Are they asleep or merely in a diabetic coma? Can we tell? If we can't, should we even try this?
Ultimately, I'm really concerned about this line: "Improvements in this case meant altering night-shift protocol to have two safety drivers instead of one."
Humans may be bad drivers (though I'd say a good look at the data would suggest otherwise); even so, the lesson for Waymo should be "Humans are also bad engineers."
> Clearly, using the gas pedal to turn off auto-pilot is not good enough.
It's a trade-off: if the auto-pilot is doing something unsafe, it is better to have the human intervene immediately (this might mean stepping hard on the gas-pedal, or braking, or turning, depending on the scenario, which is impossible to know in advance). The assumption is that the software/data are not perfect (yet) and the human knows what's best. I don't think "don't trust the human test-driver" is one of the current parameters, especially since that is their literal job.
I feel like an old man shaking my fist at the slow march of inevitable progress - but - this is exactly why 'drive assist' or any other flavor of automated driving that is not 100% automation is not useful to me. If I have to pay attention and babysit piloting the vehicle, I might as well be driving (assuming I'm of sound enough mind and body to actually drive).
I disagree it was a "joke"; I'd say it was a concept car - their vision of the future. They need steering wheels because they're still testing them, but the Firefly is the goal.
They made a few, then had to install steering wheels after it was pointed out that it's illegal in CA to drive without them - i.e., a lack of basic due diligence. And they've reverted to conventional cars as a platform because that makes more economic sense. It's fine to have a goal, but if you never get there and make basic bungles along the way, it's a joke.
How do you feel about cruise control? Or automatic transmissions?
I like semi-autonomous systems because they still take some of the cognitive load off of driving. Paying attention is still easier than paying attention plus constantly having to adjust speed/steering/etc.
> How do you feel about cruise control? Or automatic transmissions?
Cruise control is great. I still control where the car goes though. I still have to be 100% attentive to what the vehicle is doing. At no point am I relying on the vehicle to keep me safe.
Automatic transmissions are great too, except for very specific use cases.
> I like semi-autonomous systems because they still take some of the cognitive load off of driving. Paying attention is still easier than paying attention plus constantly having to adjust speed/steering/etc.
I think I would be more open to 'smart' information systems - beeps when leaving a lane, back up cameras, etc. Anything that takes piloting control is a step too far for me at this time. Cruise control is largely proven to work. I never worry about it failing when I use it.
I like the idea of emergency braking as one last backstop. It can react faster than I can, and if it prevents something, I’m happy. It’s not like I’ve given up the brake pedal to it.
Lane assist is similar. It’s very useful in high crosswinds, but it’s reliable enough even on well painted roads that you get used to it and end up relying on it. It feels a bit like power steering: makes the job easier but you never stop doing the job.
> I like the idea of emergency braking as one last backstop. It can react faster than I can, and if it prevents something, I’m happy. It’s not like I’ve given up the brake pedal to it.
This is an example of something I might be open to, but I guess because I write software I have a million 'what if' nags worrying about edge cases.
> Lane assist is similar. It’s very useful in high crosswinds, but it’s reliable enough even on well painted roads that you get used to it and end up relying on it. It feels a bit like power steering: makes the job easier but you never stop doing the job.
On this I disagree. If you're not controlling the vehicle enough to keep in your lane, you're not paying enough attention - except in the high crosswinds scenario. I don't drive in high crosswinds, so I guess that is not a use case I am familiar with.
It still feels like you’re keeping it in the lane; it just doesn’t feel like it takes as much force as it might otherwise. The curves may seem gentler.
Like many of these it requires active input. It will turn itself off (with a nice audible and flashing warning) if you don’t keep doing your part. You’re not going to get more than 30-45s of ‘I don’t have to do anything’ before it turns off and it’s all on you again.
I am not so sure about that statement. On a recent trip I passed through Cincinnati and let my car, a Model 3, handle both the speed and steering. I set a respectable follow distance, which did of course encourage a few of the more aggressive drivers to dart in front of me. It never put me in a situation where I felt I was at risk, and more than once it braked for something far quicker than I caught on.
As for the steering and lane keeping, even on V8 it was perfectly fine across hundred-mile stretches of interstate. Where it had difficulty was when the right line vanished for splitting highways or off-ramps. Still, it recovered to the left before fully going to the right. Not 100%, but I bet over 90% of my 600-mile commute north was with both traffic-aware cruise control and auto steering.
What took me the longest to get used to was TACC, which many cars have and have had for years. Once I learned it really did stop and move correctly, everything fell into place. The steering was my least concern and I quickly understood its limitations. The payoff was realizing I was drowsy when I saw something in the rear-view mirror I did not recall passing, and the car had kept me and others safe. Needless to say, that was a wake-up call to take a break.
Is it perfect? No. However, I am under no illusion that I will be able to hop in an FSD car of any brand in bad weather and have it take me to the hospital within the next five years, and maybe not even ten. I really want to see how they overcome the snow and near-whiteout conditions that I have driven in because at the time I had to.
That's tricky. So you need to be able to take control of the car quickly in an emergency, but at the same time you want the human pseudodriver to be free to be distracted or drowsy, and so in a state where they could accidentally hit this button.
That would be classified as "Level 3" autonomous, aka "Driver must be available to take over controls". Waymo has publicly stated that it feels that Level 3 autonomy is unsafe and that its goal is Level 4, "Driver not expected to take control".
Waymo currently has Level 3 cars, which is why they employ safety drivers and haven't made their cars publicly available.
They had the little bubble cars running around for a while that didn't appear to have a steering wheel but I believe they were controllable via a joystick.
Perhaps not in all cases, but enough to mimic real-life usage; otherwise this scenario may not have been uncovered until mass adoption. I could understand pairing a 'real test driver' with a passive, non-engaged safety person in the passenger seat - though what can they do but wake a driver who falls asleep, and hopefully wake them before they accidentally engage manual mode while asleep?
They're not supposed to mimic real-life; Waymo's plan is to not have drivers at all.
A passenger can hold the steering wheel until the driver comes to. Or they can even use a car with two sets of pedals, they're common in driving schools.
>> At a company meeting to discuss the incident, one attendee reportedly asked whether safety drivers were on the road too long, and was told that drivers can take a break whenever they need to.
I hope they also have reasonable mandatory breaks. It would seem that there is pressure to stay on the road as long as your peers, which could lead to drivers not taking breaks when they should.
"whenever they need to" for breaks usually translates to, "whenever they need to find another job". HR/PR might say it, and they might even believe it, but the company will incentivized management for output. Those managers are going to be pushing their employees to do as much as possible with as little breaks as possible.
Unless they set up a completely separate chain of command that is in charge of validating whether an employee is legitimately being fired, it's always going to end up following the incentive structure.
While the lack of serious injuries here makes it easier to be objective about it, I think it's actually good to see one of the suspicions about self-driving cars borne out in real-life testing: this is an area where safety isn't a smooth linear progression but more of a step function, or something with a valley before rising. That means that Waymo's approach is in fact the best one: either stick to level 1, maybe a bit of level 2, or bite the bullet and go directly to 5, figure out how to bring the statistical rates there (combined with the positive utility function) to a point where they're flat-out superior to human drivers, and then sell that. This is actually kind of unusual for a lot of the history of tech, I think, where the (not unwarranted) common wisdom tends to be that gradual improvement, live testing, and maturing over years is more effective than aiming for big leaps all at once. But there are exceptions that prove rules, and this seems to be one of them: overall safety is a combined function of human driver quality and car AI quality, but they're non-linearly inversely linked, in that the more the AI takes over, the more the human loses the ability to stay focused and practiced, and furthermore has to deal with context shifts, which are inherently slow and would be required exactly when things are most serious. So it's a lot more all-or-nothing.
None of that should take away from the value of pursuing it, however. Humans are in fact bad drivers, and have large numbers of failure modes, like getting inebriated or sleepy or angry, that a system can simply avoid entirely. But arguably even more important than safety, which is what tends to come up most, is utility (after all, we put up with vastly less safe cars and driver practices for decades and decades before the present time). Personal arbitrary-point-to-point mechanized transportation is effectively required for full participation in much of the modern world; it's just the danger and expense that makes it a quasi-right rather than a full one. The young and elderly being unable to take advantage of it is already a suboptimal state of affairs, and it's frankly a waste of everyone's valuable and limited human brain time that they're required to act as uncreative biocomputers in that role. It's all been worth it anyway because it's just that valuable, but removing the requirement for a human to play that role is really just fulfilling what the tech should have been like from the beginning, if our synthetic mechanical development hadn't far outstripped our synthetic information collection and processing development. A certain amount of ongoing casualties and damage is entirely acceptable to achieve a societal improvement of that level of value, even if it weren't much safer at all.
Woof. That's one way to do it. The way that'll get you either unelected from whatever municipal elected power position you have or fired from whatever unelected position, but one way.
Apropos, Tim Harford is an economist and the author of multiple exceptional books on economics targeted at the general public, not a reporter. He manages to show how deeply fascinating economics is and that economic phenomena are part of the fabric of not just our societies, but the very natural world. I highly recommend his "Undercover Economist" and other books.
This instantly evokes ideas about distinct technologies from different sources coming together to form a greater whole:
Things like the Apple Watch monitoring your awareness and activity level, and sending it to the car which uses that information to influence how it drives, or when it vibrates the seats to keep you awake or wake you up, and so on.
Given that this was a crash with no injuries, I think Waymo's severity-weighted crashes per mile are better than a typical human's. Anyone know the numbers on this?
No, that's not what it seems like. This news was broken by Amir Efrati who writes for The Information. He tends to sensationalize his stories, which I don't like, but his facts check out. He has sources inside Waymo and has been reporting stuff like this for a while. He's more than happy to bash Uber as well when he gets the chance.
Not sure it's in Uber's interest to play up the dangers of self-driving cars in general. Nobody's gonna look at this story and think "I guess Uber's not that bad after all!"
It's so sad how these companies are spending billions on developing this technology and then still cheap out on the test drivers by having them drive solo and for overly long hours (and presumably at minimum wage).
The lack of any requirement for driver input is what causes the operator to drift off or fall asleep. It's the same reason they inject test/dummy positive weapon images that aren't physically present into the X-ray image stream at security checkpoints: to keep the operator alert.
If you have a human overseer in the seat, you want to find ways to continue to keep them engaged even if it isn't necessary for the vehicle's autonomy.
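Purely speculative, but the X-ray analogy could translate here into randomly timed "acknowledge" prompts with a short response deadline, where a missed prompt escalates. A sketch (the intervals and the deadline are made-up numbers, not anything Waymo actually does):

    import random
    import time

    PROMPT_INTERVAL_RANGE_S = (120, 600)   # assumed: prompt every 2-10 minutes
    RESPONSE_DEADLINE_S = 5.0              # assumed: must acknowledge within 5 s

    class AlertnessProbe:
        """Speculative sketch: periodically ask the safety driver to press an
        acknowledge button, analogous to injecting test images in an X-ray stream."""
        def __init__(self):
            self.next_prompt_at = time.monotonic() + random.uniform(*PROMPT_INTERVAL_RANGE_S)
            self.prompt_issued_at = None

        def tick(self, now=None):
            """Returns 'prompt', 'missed', or None. Call regularly from the main loop."""
            now = time.monotonic() if now is None else now
            if self.prompt_issued_at is not None:
                if now - self.prompt_issued_at > RESPONSE_DEADLINE_S:
                    self.prompt_issued_at = None
                    self.next_prompt_at = now + random.uniform(*PROMPT_INTERVAL_RANGE_S)
                    return "missed"   # escalate: louder chime, log it, or begin a safe stop
                return None
            if now >= self.next_prompt_at:
                self.prompt_issued_at = now
                return "prompt"       # show/announce the prompt to the driver
            return None

        def acknowledge(self):
            """Call when the driver responds to the prompt."""
            self.prompt_issued_at = None
            self.next_prompt_at = time.monotonic() + random.uniform(*PROMPT_INTERVAL_RANGE_S)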
Well, the driver did provide input: he touched the gas pedal, which disengaged the autonomous system and put the vehicle in manual mode, and that's presumably what you want to have happen.
When it's time to disengage the vehicle you don't want to be fighting for control with the computer, manual override needs to be on a hair trigger in case the computer makes a sudden error. Unfortunately so long as there's a human in the loop, the propensity for human error is going to be there. I honestly don't know what Waymo could do to prevent this, multiple bells and whistles went off.
Waymo takes this stuff very seriously, they absolutely do not want to have to face the music should one of their vehicles get into an at-fault accident or suffer some other kind of catastrophic failure.
I just did a Google News search for "asleep at the wheel", and in the last week a few people have been killed, a few more hospitalized, and the more minor incidents don't even get reported. Waymo sincerely wants to solve this, but there's an awkward no-man's-land between no autonomy and full autonomy where Vigilance Decrement[1] is an issue.
Fortunately this was a minor accident. Waymo's test fleet reportedly covers 25,000 miles a day, and in regular day to day driving minor accidents happen about once every 160,000 miles. So if incidents like this are happening once a week for Waymo, that's about par for the course.
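Back-of-the-envelope with those two figures (both are the numbers quoted above, not official stats):

    fleet_miles_per_day = 25_000        # reported Waymo test-fleet mileage
    miles_per_minor_accident = 160_000  # typical human rate quoted above

    days_between_expected_accidents = miles_per_minor_accident / fleet_miles_per_day
    print(days_between_expected_accidents)  # 6.4 -> roughly one minor accident per week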
The article implies without stating outright that the driver was working a night shift; night shifts are problematic for sleep schedules across a wide range of industries. People fall asleep at night, even when they are expected by their bosses to be alert. Waymo responded by walking back its cut of safety drivers from 2 to 1, but only for night shifts.
If you tell someone that they will be fired the literal instant they doze off, all you are doing is guaranteeing that they will get fired, especially when you also task them with a boring job and ban all means by which they might boost their orexin levels during their shift. Orexin is one of the neuropeptides that regulate alertness, and it responds to mental engagement. It cycles up and down with the circadian period. The result: bored at night; out like a light. The occupants of a self-driving vehicle will fall asleep at night. If you aren't testing that use case, you aren't doing your job.
Managing alertness levels at night is the company's problem. Bright lighting with plenty of blue light in the spectrum is part of it. Keeping the employees from getting bored is, too. Sleep countermeasures will have to be installed with user-monitored car automation, and the safety-driver/tester just saved Waymo from a future liability nightmare. Why are they no longer with the company?!
I think one solution may be to gamify safety-driving such that the human safety driver for an autonomous vehicle earns points for vigilance-related tasks. Identify potential road hazards before the autonomous system does. Count blue cars. Play the 49-state license plate game. Play a trivia game over the audio system. Hear about local points of historical interest near where you are driving. Anything other than silently watching moving road lines extend beyond the range of your headlights.
Which highlights the issue: if there isn't some level of engagement required (or some level of fear/stress that it might be required), your mind/body can reach a restful enough state to fall asleep. In this case, of course, the test driver may also not have taken care of themselves properly, had proper sleep, or many other possibilities - which I wonder if we'll get a truthful account of (whether due to Waymo and/or the driver not being straightforward).
It's an interesting edge case that you might not think of, or might think the odds are "one in a million" and not design for; I wonder how many driving hours etc. had occurred so far (and other statistical breakdowns across different contexts) before this "one in a million" event occurred.
More than that, the ability to sleep while going from place to place is a feature. I'm looking forward to going to "grandma's", leaving at bedtime, and wherever we wake up telling the car to stop, and spending the morning exploring some random town in the middle of nowhere. Then a short nap after lunch while going to some other random town in the middle of nowhere for the afternoon, then waking up for breakfast with grandma.
Nope. Waymo's cars don't have passengers in reach of controls. The Firefly didn't even have pedals or a steering wheel, and on the Pacificas they sit in the back, I believe.
At this stage in the game, these prototype vehicles should absolutely detect whether or not their operators are paying attention, since they’re part of the safety system of the vehicle.
Also, did they not learn from Uber’s fatal crash? A sole operator in these vehicles is a terrible idea. One person to monitor road conditions and one person to operate any computer equipment should be the standard until we hit level IV or V autonomy.
I'm being serious when I ask. I question whether people can safely use these vehicles in these modes where they're supposed to not interact ... but pay close attention at the same time.
I think it is going to have to be full autonomy, or not at all.
> I question whether people can safely use these vehicles in these modes
Of course they can't. Watch people who've been looking at their phone while the light's red. Watch how they drive after it turns green. Some of them drive away like they're drunk, lurching and weaving, like they're reorienting themselves to the driving task. On the rare occasions I get even slightly engaged with the phone at a light, it's almost like I have to "reboot" back to driving mode when the light goes green.
The part of the "throwing it in the human's lap at the last second" that bothers me is that a lot of context gets missed when it's time to grab the wheel. At least for me, I get context a little bit at a time. That guy in the left lane is looking at his phone more than the road...<a few seconds later>...big truck's going to want in this lane when his ends, leave room...<more seconds>...I see taillights 12 cars ahead, what's going? And Level 3 is going to just hand me the wheel without me knowing any of that? Yeah, I see why Waymo is shooting for Level 4.
You also miss the interaction between humans. If another driver and I look at each other, we're at least aware of each other's existence, and I know what's up.
If a driver doesn't look at me ... does he know? does his car know? If he is suddenly given the wheel ... I don't think humans can handle that.
There's a difference between usage while testing and usage in the final product. While they're testing the system, they should probably have two operators paying attention.
This is a prototype vehicle. Safety controls must be implemented differently than a production vehicle that has passed some yet unspecified certification process.
Complaining about a prototype having different UX than a production vehicle is like complaining about a racing car having different seatbelts than a road car. Different design requirements for different usages.
I'm not sure what distinction you're drawing here.
It's a prototype vehicle, where the self driving features are experimental and not certified by a third party. Extra safety systems are necessary that might not be reasonable once the tech is polished and regulated.
I'm saying any self-driving where the human has to monitor how the computer is doing and be ready to take over is probably a non-starter, as human nature likely dictates that they will probably do a worse job at that than at just driving.
It was in manual mode. Should it forcibly take away control from the user that gave it a direct command, because it "thinks" they're not paying attention?
Letting a developer drop the production database[1] is not good either, but trusting a person to do their job correctly is baked into every organization and level. Still, accidents happen - there is no fool-proof way to stop humans from behaviours that are ultimately harmful. Elaborate systems will have elaborate loopholes.
For certain you cannot prevent all accidents, and I am not trying to say that we can/should.
I think having a sole operator in a prototype vehicle drastically increases the odds of these accidents, and that companies like Waymo should run them with two operators and some drowsiness detection systems (which are common on luxury cars). That’s a relatively simple fix.
It’s important that testing with multi-ton moving vehicles be handled with far more care than most production databases are, speaking to your metaphor. The consequences of mistakes are much, much higher with cars than with most databases.
It never should have allowed the vehicle to proceed with a fatigued driver, and the company should have policies in place to prevent this (maximum hours, dual operators, etc.)
Agreed. Long term this should work similar to autopilots in planes. They make a loud noise when disengaged so it is very clear when it stops working for whatever reason. Some modern planes actually take control and engage the autopilot when things get out of hand to recover from dangerous situations.
It seems to me that a self-driving car is preferable to having a driver that is asleep, zoned out, or dangerously distracted by whatever is happening on their phone or in the car. Many accidents have that as the root cause.
That is completely false. Any decent camera or eyesight from the car's point of view would have seen the pedestrian well enough in advance to avoid that crash. Uber's fatal crash was caused by the car being programmed to ignore stationary/slow-moving objects in the way.
Not quite. The camera-only systems were confused, but the lidar systems were completely certain that a collision was imminent, with enough time to prevent the collision or at least reduce injuries.
Unfortunately the lidar did not have control of the system, did not have override capability, and could not notify the driver in an emergency in any way. It was effectively disabled from the perspective of vehicle control and safety.