I noticed that trend in comments, yes. Unfortunately, the real issue with what Uber or Waymo (or anyone else) are doing is with the limitations inherent in the technology itself, specifically, machine learning for object recognition and identification and for the learning of complex behaviours.
The limitation is a bit technical, but basically: in principle, machine learning works under certain assumptions, as laid out by Valiant in his PAC-learning paper (A Theory of the Learnable), in particular the assumption that the training sample has the same distribution as the unseen data. Under this condition, machine learning can be said to work, and we can look at performance metrics and be happy they look good.
Well, except that the real world has no obligation to operate under our experimental assumptions, so once you deploy machine learning systems in the real world, their performance goes down, because in the lab you haven't seen nearly enough of the data you really need to see.
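To make that concrete, here's a toy sketch (my own illustration, not from Valiant's paper) of what happens when the train/test distribution assumption is violated. A trivial threshold classifier is "trained" on two Gaussian classes, then evaluated once on data from the same distributions and once on data where the whole world has shifted. Class centres, sample sizes, and the size of the shift are all made up for the demo:

```python
import random

random.seed(42)

def sample(mu, n):
    """Draw n points from a unit-variance Gaussian centred at mu."""
    return [random.gauss(mu, 1.0) for _ in range(n)]

# Training data: class 0 centred at 0.0, class 1 centred at 4.0.
train0, train1 = sample(0.0, 500), sample(4.0, 500)

# "Learn" the simplest possible classifier: a threshold at the
# midpoint of the two empirical class means.
mean0 = sum(train0) / len(train0)
mean1 = sum(train1) / len(train1)
threshold = (mean0 + mean1) / 2.0

def accuracy(pts0, pts1):
    """Fraction of points the threshold rule classifies correctly."""
    correct = sum(x < threshold for x in pts0) + sum(x >= threshold for x in pts1)
    return correct / (len(pts0) + len(pts1))

# In-distribution test set: same class centres as training,
# so the PAC-style assumption holds and accuracy is high.
acc_in = accuracy(sample(0.0, 500), sample(4.0, 500))

# Shifted test set: everything moved by +3, the assumption is
# violated, and the stale threshold misclassifies most of class 0.
acc_shift = accuracy(sample(3.0, 500), sample(7.0, 500))

print(f"in-distribution accuracy: {acc_in:.2f}")
print(f"shifted accuracy:         {acc_shift:.2f}")
```

Nothing about the classifier changed between the two evaluations; only the world did, and the lab metric stopped predicting real performance.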
And, if you attach such assumptions to safety-critical systems, then you're taking an unknown and unquantifiable risk. Or in other words, you're putting people's lives in danger.
And that's everyone who uses machine learning to train cars to drive in real-world conditions. Not just Uber.
Yes, but that is not specific to machine learning. Humans have also learned to drive under the assumption that future observations will somewhat resemble what they have seen in the past. And yes, that is putting people's lives in danger.
But that has nothing to do with machine learning. It has to do with all control systems, human or machine.
The point is that machine learning algorithms' decisions always have some amount of error, and that this error goes way up in the real world.
The self-driving car industry's marketing claims that self-driven vehicles are safer than humans just because computers have faster "reaction times" (they probably mean faster retrieval from memory).
But if your reaction is completely wrong, it doesn't matter how fast you react. Reacting very fast with very high error will just cause a very fast accident, and make it harder for puny humans' reflexes to avoid it, to boot.