Maybe this suggests that an effective use of current (i.e., 99.9%-reliable) autonomous driving implementations would be a captain-and-co-captain type of scenario, where the car only responds to commands that agree between the two "drivers": effectively a logical AND of driver and computer inputs (within some tolerance, filtered, etc.).
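Roughly what I'm picturing, as a toy sketch in Python (the signal names, tolerances, and the averaging blend are all made up for illustration, not anyone's actual control stack):

    # Toy sketch of the "logical AND" idea -- names and thresholds invented.

    STEER_TOLERANCE_DEG = 5.0   # how closely the two inputs must agree
    BRAKE_TOLERANCE = 0.1       # fraction of full braking

    def combined_command(driver, computer):
        """Return a command only when driver and computer roughly agree.

        `driver` and `computer` are dicts like {"steer_deg": ..., "brake": ...}.
        Returns None when they disagree, i.e. the AND fails and no command passes.
        """
        if abs(driver["steer_deg"] - computer["steer_deg"]) > STEER_TOLERANCE_DEG:
            return None
        if abs(driver["brake"] - computer["brake"]) > BRAKE_TOLERANCE:
            return None
        # Within tolerance: blend the two inputs (averaging is just one option).
        return {
            "steer_deg": (driver["steer_deg"] + computer["steer_deg"]) / 2,
            "brake": (driver["brake"] + computer["brake"]) / 2,
        }

The interesting (and hard) part is what the car does when this returns None, which is exactly the disagreement case discussed below.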
What if the promise of autonomous driving tech isn't to make human drivers obsolete, but to make human drivers nearly perfectly safe? What are the chances that an engaged human driver and an autonomous system would both simultaneously give the same incorrect or dangerous input to a car? It's not as sexy as robot cars, but it seems like a significant improvement over the current situation. It also seems like a viable way to test on public roads.
What do you do in the case where the driver and the computer disagree? Say there is an object directly in front of the car: the driver turns left to avoid it, the computer turns right. You have to take SOME action -- you can't do nothing, otherwise you hit the obstacle. If you go with the driver, what is the point of the computer if it can't overrule bad decisions? That's equivalent to just letting the driver drive. If you go with the computer, what is the point of the driver? Acting only on an AND of driver and computer inputs seems to work only if there is one correct course of action, which I doubt is the case in many situations.
Maybe you could have a system where the computer only acts to deliberately prevent unsafe situations, and is conservative in doing so (i.e., it doesn't drive, per se, but brings the car to a safe stop or enforces a maximum speed), but that's a huge step down from the current goals.
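Something like this, as a rough sketch (the speed ceiling, gap, and deceleration numbers are invented for illustration):

    # Toy sketch of the "conservative supervisor" idea -- all constants made up.

    MAX_SPEED_MPS = 30.0        # hard ceiling the supervisor enforces
    MIN_GAP_M = 10.0            # closest we ever want to get to an obstacle
    INTERVENTION_DECEL_MPS2 = 3.0  # braking rate assumed when intervening

    def supervise(driver_throttle, driver_brake, speed_mps, obstacle_gap_m):
        """Pass driver inputs through unless they would create an unsafe state.

        Returns (throttle, brake). The supervisor never steers; it only caps
        speed or brings the car to a stop.
        """
        # Distance needed to stop from the current speed at the intervention rate.
        stopping_dist = speed_mps ** 2 / (2 * INTERVENTION_DECEL_MPS2)

        if obstacle_gap_m - stopping_dist < MIN_GAP_M:
            # Not enough margin left to stop: override with full braking.
            return 0.0, 1.0
        if speed_mps > MAX_SPEED_MPS:
            # Over the ceiling: cut throttle and brake gently until back under.
            return 0.0, max(driver_brake, 0.3)
        # Otherwise the driver drives.
        return driver_throttle, driver_brake

The key property is that it never generates a maneuver of its own; it only shrinks what the driver is allowed to do, which is why it sidesteps the left-vs-right disagreement problem above.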
Haha, yeah, good point. In the event of contrary inputs, I guess one input does have to take precedence, e.g., one is the 'pilot' and the other is the 'co-pilot.' Your proposal is much better thought out. But then I think we've described the collision avoidance systems that are in production today. Take it any further and you run into the disengaged-driver situation.