> If uber's software wasn't robust, why "test in production" when production could kill people?
Because it's cheap. And Arizona lawmakers apparently aren't doing their job of protecting their citizens against a reckless company pulling the classic "privatize profits, socialize losses" move: the "profits" are the improvements to their so-called self-driving car technology, and the "losses" are the random people endangered and killed while they alpha-test and debug that technology in this nice testbed we call a "city", which conveniently comes complete with irrationally acting humans you don't even have to pay to serve as actors in your life-threatening test scenarios.
Disclaimer: I am playing Devil's Advocate and don't necessarily subscribe to the following argument, but:
Surely it's a question of balancing against the long term benefit from widely adopted autonomous driving?
If self-driving cars in their current state are at least close to as safe as human drivers, then you could argue that a small short-term increase in the casualty rate, in exchange for a faster rate of development, is a reasonable cost. The earlier proper autonomous driving is widely adopted, the better for overall safety.
More realistically, if we think that current autonomous driving prototypes are approximately as safe as the average human, then it's definitely worthwhile - same casualty rate as current drivers (i.e. no cost), with the promise of a much reduced rate in the future.
Surely "zero accidents" isn't the threshold here (although it should be the goal)? Surely "improvement on current level of safety" is the threshold?
You can make the argument with the long-term benefits. But you cannot make it without statistically sound evidence about the CURRENT safety of the system you intend to test, for the simple reason that the other traffic participants you potentially endanger are not asked whether they accept any additional risk you intend to expose them to. So you really need to be very close to the risk they're already exposed to right now, which is approximately one fatal accident every 80 million miles driven by humans, under ANY AND ALL environmental conditions that people drive under. That number is statistically sound, and you need to put another number on the other side of the equation that is equally sound and on a similar level. That is currently impossible, for the simple reason that no self-driving car manufacturer is anywhere close to having multiple hundreds of millions of miles traveled in self-driving mode under conditions close enough to real roads in real cities with real people. Purely digital simulations don't count. What could count, in my eyes, is real miles with real cars in "staged" environments, such as a replica of a small city, with other traffic participants who deliberately subject the car to difficult situations, erratic actions, et cetera, all of whom must be okay with their exposure to potentially high-risk situations.
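To put a rough number on what "statistically sound" demands here, this is a minimal back-of-the-envelope sketch (my own illustration, not anyone's actual fleet data; it uses the rule of three and the one-fatal-accident-per-80-million-miles baseline from above):

```python
# Rough illustration of the point above: a clean record over a few million
# miles says very little. With zero fatalities observed over N miles, the
# "rule of three" gives ~3/N as the 95% upper confidence bound on the true
# per-mile fatality rate. The mileage values below are placeholders.

HUMAN_RATE = 1 / 80e6  # ~1 fatal accident per 80 million human-driven miles

def upper_bound_rate(miles_driven: float) -> float:
    """95% upper confidence bound on the fatality rate, given a clean record."""
    return 3.0 / miles_driven

for miles in (3e6, 10e6, 100e6, 300e6):
    ratio = upper_bound_rate(miles) / HUMAN_RATE
    print(f"{miles/1e6:>4.0f}M miles with zero deaths: "
          f"true rate could still be up to {ratio:.1f}x the human rate")
```

Even a perfect record over 10 million miles is still compatible with being more than 20 times worse than human drivers; the bound only approaches the human baseline once you get into the hundreds of millions of miles.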
Of course that is absurdly expensive. But it's not impossible, and it's the only acceptable way of developing this high-potential but also highly dangerous technology up to a safety level at which you can legitimately argue that you are NOT exposing the public to any kind of unacceptable additional risk when you take the super-convenient and cheap route of using public infrastructure for your testing. If you can't deal with these costs, just get the fuck out of this market. I'm also incapable of entering the pharmaceutical development market, because even if I knew how to mix a promising new drug, I would not have the financial resources to pay for the extensive animal and clinical testing necessary to make that drug safe enough to sell to real humans. Or can I also just argue "hey, it's for the good of humanity, it'll save lives in the long run, and I gave it to my guinea pig, which didn't die immediately, so statistically it's totally safe!" when I'm caught mixing the drug into the dishes of random guests at a restaurant?
It's an n of 1, but we're nowhere close to 'human driver' levels of safe.
Humans get 1 death per 100 million miles.
Waymo/Uber/Cruise have <10 million miles between them. So currently they're 10 times more deadly. While you obviously can't extrapolate like that, it's still damning.
If you consider just Uber, they have somewhere between 2 and 3 million miles, suggesting a 40x more deadly rate. I think it's fair to consider them separately as my intuition is that the other systems are much better, but this may be terribly misguided.
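For what it's worth, the naive arithmetic behind those 10x and 40x figures is just this (the mileage numbers are the rough estimates above, not verified fleet data, and as said you can't really extrapolate from a single death):

```python
# The naive arithmetic: one fatality so far, divided into the rough mileage
# estimates from this thread, compared against the human baseline.

HUMAN_MILES_PER_DEATH = 100e6  # ~1 death per 100 million human-driven miles

estimates = {
    "Waymo/Uber/Cruise combined": 10e6,  # "<10 million miles between them"
    "Uber alone": 2.5e6,                 # "between 2 and 3 million miles"
}

for fleet, miles in estimates.items():
    ratio = HUMAN_MILES_PER_DEATH / miles  # implied multiple of the human rate
    print(f"{fleet}: 1 death / {miles/1e6:.1f}M miles -> ~{ratio:.0f}x the human rate")
```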
This is a huge deal.
I honestly never thought we'd see such an abject failure of such systems on such an easy task. I knew there would be edge cases and growing pains, but 'pedestrian crossing the empty road ahead' should be the very first thing these systems are capable of identifying. The bare minimum.
This crash is going to result in regulation, and that's going to slow development, but it's still going to be justified.
I have the same questions. But my best guess is that they probably have permission to drive at non-highway speeds late at night/early in the morning (which is when this accident occurred, at 10 PM).
> The Volvo was travelling at 38 mph, a speed from which it should have been easily able to stop in no more than 60-70 feet. At least it should have been able to steer around Herzberg to the left without hitting her.
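A quick back-of-the-envelope check of that quoted figure (my own, assuming straight-line braking on dry pavement at a typical 0.7-0.8 g and ignoring reaction time):

```python
# Sanity check on the quoted 60-70 ft stopping distance at 38 mph.
# Assumes straight-line braking on dry pavement at 0.7-0.8 g and ignores
# reaction time (which is the point: the computer shouldn't need any).
MPH_TO_MS = 0.44704
FT_PER_M = 3.28084
G = 9.81  # m/s^2

speed = 38 * MPH_TO_MS  # ~17 m/s

for decel_g in (0.7, 0.8):
    distance_m = speed**2 / (2 * decel_g * G)
    print(f"{decel_g} g braking: ~{distance_m * FT_PER_M:.0f} ft to stop")
# -> roughly 69 ft at 0.7 g and 60 ft at 0.8 g, consistent with the quote
```

So 60-70 feet is consistent with a firm emergency stop; a human driver's 1-1.5 second reaction time would add roughly another 55-85 feet on top, which a computer in principle shouldn't need.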
As far as why test this, I'm guessing peer pressure(?). Waymo is way ahead in this race and Uber probably doesn't wanna feel left out, maybe?
Once again, all of these are speculations. Let's see what NTSB says in the near future.
I live here and they drive around at all times of the day and don't seem to have any limitations. They've been extremely prevalent and increasing in frequency over the past year. In fact, it's unusual _not_ to see them on my morning commute.
> At least it should have been able to steer around Herzberg to the left without hitting her.
Does the car have immediate 360-degree perception? A human would have to look in one or two rear-view mirrors before steering around a bike, or possibly put himself and others in an even worse situation.
If you're about to hit a pedestrian and your only option is to swerve, then you swerve. What could you possibly see in the rear-view mirror that would change your reaction from "I'm gonna try to swerve around that pedestrian" to "I'm gonna run that pedestrian over"? Another car? Then you take your chance and turn in front of that car! The chances that people will survive the resulting crash are way higher than the survival rate of a pedestrian being hit at highway speeds.
You should always be aware, when driving, of where your "exits" are. This is not hard to do. Especially at 38 MPH, you can be extremely confident there are no bikes to your left if you have not passed any in the past couple of seconds. And lanes in the US are generally wide enough that you can swerve partway into one even if there are cars in it.
If everybody is driving at the same speed in all lanes, which is not unlikely on that kind of road, I generally am not confident that I can swerve into another lane _and slam the brakes_ without being hit. If I am hit, the resulting impact speed with the bike could be even worse than if I had just slammed the brakes, so I don't think it's really a given.
You also cannot decide in 1 second what would happen if the pedestrian were to freeze, and whether you'd end up hitting him/her even worse by swerving left.
Most people in that situation would just brake, I think.
Other self-driving car companies (like Google, or whatever they renamed it to) have put a lot more work into their systems and done a much greater degree of due diligence in proving their systems are safe enough to drive on public roads. Uber has not, which is why they've been kicked out of several cities where they were trying to run tests. But Tempe and Arizona are practically a lawless wasteland in this regard and are willing to let Uber run amok on their roads in the hopes that it'll help out the city financially somehow.