1. A driver who is not looking at the road cannot "potentially intervene", and is as good as no driver at all.
2. These companies seem to be doing nothing to make sure that the drivers always pay attention and are always in a position to intervene. They even seem to allow smartphone usage while they are in the car.
So, according to them, the human behind the wheel is just a decoy to prevent backlash from officials and the public, so that they can always say, "look, there is a human behind the wheel if something goes wrong"...
Also, even if they implement some measures, they can only make sure that the driver has eyes on the road, not that they are actually paying attention. A driver who is actively driving the car will notice a lot more than a passenger who is just looking at the road. There is no way to make a human pay that kind of attention without actually driving the car. So at best, your "driver behind the wheel" is as good as a passive passenger.
And as noted before, the companies are not even trying to make sure of that.
I could be wrong, but I believe part of the reason for having a human behind the wheel is that it allows the testing to take place under existing driving laws. At some point prior to an unmanned vehicle being allowed on the road, lawmakers need to have some kind of framework in place to deal with any incidents that arise. With a human behind the wheel, a fully autonomous car is legally no different to cruise control - it's just a driver assist, and the human behind the wheel is still ultimately responsible for whatever the vehicle does.
In that context, the landscape changes significantly - instead of a self driving car that mowed down a pedestrian, we have a driver who was too busy looking at her phone to pay attention to what her vehicle was doing. From the various articles, it seems that she's not an engineer, and is there in effectively the same capacity as any other Uber driver. If that's the case, she's putting far too much trust into an experimental system. I agree that Uber could do more in the way of technological means to ensure the driver is paying attention, but at some point, an adult with a job needs to be responsible for doing that job.
>lawmakers need to have some kind of framework in place to deal with any incidents that arise. With a human behind the wheel...
The framework should have been in place before these vehicles were ever put on the roads. For example, there should have been some formally specified tests that a self-driving vehicle must pass before it can be put on the road, even with a backup driver.
> a fully autonomous car is legally no different to cruise control - it's just a driver assist, and the human behind the wheel is still ultimately responsible for whatever the vehicle does.
Anything that does not require the driver to keep their hands on the wheel is not a driver assist. It IS the driver. So there should be tests that verify the competence of the tech that is in the driver's seat.
>they can only make sure that the driver has eyes on the road, not that they are actually paying attention.
I'm certain that if you can design and build a self-driving car, you can design a simple human attention monitoring system that will cause the car to pull over if the attention level is too low.
Gaze monitoring that checks for looking downwards or away from the carriageway for extended or too often repeated periods would probably be enough.
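Something along these lines is all I mean -- a rough sketch in Python, where the sensor hooks (gaze_on_road, pull_over) and the thresholds are entirely made up for illustration:

```python
# Rough sketch of a gaze-based attention check. The sensor hooks and
# thresholds are hypothetical, not from any real system.
import time

MAX_CONTINUOUS_AWAY_S = 2.0   # longest tolerated single glance away from the road
MAX_AWAY_RATIO = 0.15         # tolerated fraction of off-road time over the window
WINDOW_S = 60.0

def monitor_attention(gaze_on_road, pull_over, poll_hz=10):
    """Poll the gaze sensor and request a safe stop when attention drops."""
    interval = 1.0 / poll_hz
    away_start = None
    history = []              # (timestamp, on_road) samples for the rolling window

    while True:
        now = time.monotonic()
        on_road = gaze_on_road()
        history.append((now, on_road))
        history = [(t, s) for (t, s) in history if now - t <= WINDOW_S]

        # One glance away that lasts too long?
        if not on_road:
            away_start = away_start or now
            if now - away_start > MAX_CONTINUOUS_AWAY_S:
                pull_over("driver looked away for too long")
                return
        else:
            away_start = None

        # Too many short glances away within the rolling window?
        away_ratio = sum(1 for _, s in history if not s) / len(history)
        if away_ratio > MAX_AWAY_RATIO:
            pull_over("driver attention level too low")
            return

        time.sleep(interval)
```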
I imagine the attention of the "vehicle operator" is vital to the proper training of the vehicles -- if they don't see near misses, or failures to slow for potential hazards, or failures to react to other road users, then how can the software's faults be corrected? Do they get a human to review all footage after the drive?
I agree completely. As far as I can tell, the driver did not even have her hands on the steering wheel. How hard would it have been to put sensors on the steering wheel to require both hands? They didn't even do that. Although even if they did, I agree with your statement that "[t]here is no way to make a human pay that kind of attention without actually driving the car."
Not difficult at all, and you can make them keep reasonable attention. Look at the new Cadillac driver assist: sensors in the wheel for hand placement -and- eye tracking. If the driver isn’t watching the road/holding the wheel, they get escalating alarms until the autopilot disengages.
And that’s consumer driver assist tech, not “we are experimenting with full autopilot” tech, where I’d think such safety measures would be even more appropriate.
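The escalation logic itself is trivial. A purely illustrative sketch (the stages, timings and sensor hooks are invented, not how Cadillac actually does it):

```python
# Hypothetical escalating-alert loop: warn the inattentive driver in stages,
# then disengage. All stages, timings and callbacks are made up for illustration.
import time

ALERT_STAGES = [
    (0.0, "flash dashboard warning"),
    (3.0, "sound audible chime"),
    (6.0, "vibrate seat and raise alarm volume"),
    (10.0, "disengage assist and bring the car to a controlled stop"),
]

def escalate_if_inattentive(hands_on_wheel, eyes_on_road, alert, disengage, poll_hz=5):
    """Escalate warnings while the driver is inattentive; reset when they recover."""
    interval = 1.0 / poll_hz
    inattentive_since = None
    last_stage = -1

    while True:
        if hands_on_wheel() and eyes_on_road():
            inattentive_since = None
            last_stage = -1
        else:
            now = time.monotonic()
            inattentive_since = inattentive_since or now
            elapsed = now - inattentive_since
            # Fire each stage once as its time threshold is crossed.
            for stage, (threshold, action) in enumerate(ALERT_STAGES):
                if elapsed >= threshold and stage > last_stage:
                    alert(action)
                    last_stage = stage
            if last_stage == len(ALERT_STAGES) - 1:
                disengage()
                return
        time.sleep(interval)
```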
This is a solvable and solved technical challenge. Uber just didn’t devote any resources to it because they don’t appear to give a shit beyond acquiring a legal fig leaf to shift liability from themselves to an individual.
Frequent, randomly scheduled disengagements should keep the driver quite on edge, preventing them from becoming a passenger. But each and every one of them would create additional risk, so the net improvement might be negative. There is just no way to get this right, except for being reluctant to push to scale. With all the hype, wishful thinking and investor pressure, that clearly isn't happening.
I've been thinking about this for the last couple of days, and it's definitely a hard problem -- even with steering wheel sensors and eye tracking, it doesn't stop people zoning out and not being ready to react.
I did wonder if you could require the driver to make control inputs that aren't actually used to control the car but are monitored for being reasonably close to how the computer is controlling the car, with the automation disengaging (with a warning) if the driver is not paying sufficient attention. I then realised that may be _worse_ - in the event of a problem, the driver would have to switch to real inputs that override, which may delay action and not be something they do automatically. It would mean they are paying more attention and could spot the automation making errors where there is more time to react, though (e.g. a sensor failure that is causing erratic behaviour but has not yet led to an emergency situation).
I wonder if a hybrid approach might be viable -- fake steering is used to ensure that the driver is alert and an active participant, but the driver hitting the brakes immediately takes effect and disengages the automation.
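Roughly what I have in mind, as a sketch (the hooks, thresholds and names are all invented): the driver's steering is only compared against what the automation is doing, while the brake pedal is a real, immediate override.

```python
# Illustrative "shadow steering" supervisor: the driver's wheel input doesn't
# steer the car, it's only compared against the automation's command; the brake
# pedal immediately disengages. All names and thresholds are hypothetical.

STEERING_TOLERANCE = 0.25   # allowed difference between driver and computer steering (normalised)
GRACE_SAMPLES = 20          # consecutive diverging samples before warning

def supervise(driver_inputs, automation, warn, disengage_to_manual):
    """Compare shadow driver inputs with the automation; the brake always wins."""
    bad_samples = 0
    for driver_steer, driver_brake in driver_inputs:   # stream of (steering, brake) readings
        # Hitting the brake pedal takes effect immediately and hands back control.
        if driver_brake > 0.0:
            disengage_to_manual("driver braked")
            return

        # Otherwise, check the shadow steering against the automation's command.
        if abs(driver_steer - automation.current_steering()) > STEERING_TOLERANCE:
            bad_samples += 1
        else:
            bad_samples = 0

        if bad_samples == GRACE_SAMPLES:
            warn("shadow steering diverging -- pay attention or take over")
        elif bad_samples > 2 * GRACE_SAMPLES:
            disengage_to_manual("driver is not tracking the road")
            return
```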