Presumably the algorithm had a pretty good idea of where the lanes were, and if the LIDAR detected a non-moving object in an adjacent lane and decided it was fine to ignore it because it presumed it wasn't going to start moving, that's a pretty broken algorithm.
I don't have the link handy, but I was reading a webpage yesterday (related, but not about this crash) which showed Google's self-driving car's "view" of a road scene - it had clearly painted different-colored boxes around identified pedestrians, bicycles, and other cars, along with "fences" where it had determined it'd need to slow or stop based on all those objects.
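To make that concrete, here's roughly how I picture those "fences" working - a back-of-envelope sketch under my own assumptions (object list, braking figures), not anything from Google's actual stack:

```python
# Hypothetical sketch of a per-object "fence": the point at which braking must
# begin to stop short of each detected object. All names and numbers are my
# own assumptions, purely for illustration.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str          # "pedestrian", "bicycle", "car", ...
    distance_m: float   # distance ahead along the lane

def stop_fence_m(obj: DetectedObject, speed_mps: float,
                 decel_mps2: float = 4.0, margin_m: float = 3.0) -> float:
    """How far ahead braking must start to stop margin_m short of the object,
    assuming constant deceleration."""
    braking_distance = speed_mps ** 2 / (2 * decel_mps2)
    return obj.distance_m - margin_m - braking_distance

scene = [DetectedObject("pedestrian", 45.0), DetectedObject("bicycle", 60.0)]
speed = 17.0  # ~38 mph
for obj in scene:
    print(f"{obj.label}: start braking within {stop_fence_m(obj, speed):.1f} m")
```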
Either Uber's gear is _way_ less sophisticated (to the point of being too dangerous to use in public), or it was faulty (but being used anyway, either because its self-test is also faulty, or because the driver/company ignored fault warnings) - or _perhaps_ Google's marketing material is faked and _everybody's_ self-driving tech is inadequate?
> Either Uber's gear is _way_ less sophisticated (to the point of being too dangerous to use in public)
I think this is a very real possibility, considering that autonomous vehicles are the company's goal and they're racing to get there before they run out of investment money. They have a lot of incentive to take shortcuts or outright lie about their progress.
Looks like a Velodyne 64-based LIDAR. It is virtually impossible for one of those not to see the bicycle well in advance. Uber had a serious issue here. Something like:
1. System was off
2. Point clouds were not being registered correctly (at all!)
3. It was actually in manual mode -- the safety driver didn't realize or didn't react fast enough.
4. Planning module failed
5. Worst outcome in my opinion: point cloud registered correctly, obstacle map generated correctly, system was on, planner spit out a path - but the path took them through the bicycle (see the sketch below).
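For what it's worth, the sanity check that case 5 implies was missing is conceptually simple. A rough sketch under my own assumed data structures - nothing to do with Uber's actual pipeline:

```python
# Minimal sketch (my own, not Uber's code) of the check case 5 implies was
# missing: reject any planned path that passes through a cell marked occupied
# in the obstacle map built from the registered point cloud.
import numpy as np

def path_is_clear(path_xy, obstacle_map, cell_size=0.5, origin=(0.0, 0.0)):
    """path_xy: (N, 2) waypoints in metres. obstacle_map: 2D bool array,
    True = occupied. Returns False if any waypoint lands on an occupied cell."""
    for x, y in path_xy:
        col = int((x - origin[0]) / cell_size)
        row = int((y - origin[1]) / cell_size)
        if 0 <= row < obstacle_map.shape[0] and 0 <= col < obstacle_map.shape[1]:
            if obstacle_map[row, col]:
                return False
    return True

grid = np.zeros((100, 100), dtype=bool)
grid[40, 20] = True  # bicycle + pedestrian registered here
path = np.array([[10.0, y] for y in np.arange(0.0, 40.0, 0.5)])
print(path_is_clear(path, grid))  # False -- planner must replan or brake
```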
The LIDAR data look pretty noisy, especially for distant objects. Couldn't they have filtered out the pedestrian, thinking it was a bush or something like that?
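Speculating, but one way that happens is an over-aggressive noise/clutter filter: a distant pedestrian only returns a handful of points, and a filter that throws away small clusters as noise will silently drop them. Illustrative sketch with made-up thresholds:

```python
# Speculative illustration (my own, not any real AV stack) of how an
# over-aggressive noise filter could drop a sparse distant target: clusters
# with too few returns are discarded as noise, and a pedestrian at range may
# only produce a handful of points. Thresholds are invented for illustration.
import numpy as np

def filter_small_clusters(points, labels, min_points=20):
    """Keep only points whose cluster has at least min_points members.
    points: (N, 3) array; labels: (N,) cluster ids from some clustering step."""
    keep = np.zeros(len(points), dtype=bool)
    for cluster_id in np.unique(labels):
        members = labels == cluster_id
        if members.sum() >= min_points:
            keep |= members
    return points[keep]

# A distant pedestrian might return only ~8 points; this filter silently drops them.
pedestrian = np.random.normal([40, 2, 1], 0.2, size=(8, 3))
bush = np.random.normal([15, -3, 0.5], 0.3, size=(60, 3))
pts = np.vstack([pedestrian, bush])
lbls = np.array([0] * 8 + [1] * 60)
print(filter_small_clusters(pts, lbls).shape)  # (60, 3) -- the pedestrian is gone
```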
I get your concern, but I would probably reserve the word "inadequate". If this is the only situation in which you have to worry about a self-driving car hitting and killing you, and it's the only known data point at this time, some may consider that much more than adequate.
A website that "does something weird" when you use a single quote in your password... That _could_ be "the only situation you have to worry about". It is _way_ more often a sign of at least the whole category of SQLi bugs, and likely indicative that the devs are not aware of _any_ of the other categories of errors from the OWASP top 10 lists, and you should soon expect to find XSS, CSRF, insecure deserialisation, and pretty much every other common web security error.
If you had to bet on it - would you bet this incident is more likely to be indicative of a "person pushing a bicycle in the dark" bug, or of a whole category of "person with an object isn't reliably recognised as a person" or "two recognised objects (bicycle and person), not in an expected place or moving in an expected fashion for either of them, get ignored" bugs?
And how much do you want to bet it's all being categorised by machine learning, so the people who built it can't even tell which kind of bug it is, or how it got it wrong, so they'll just add a few hundred clips of "people pushing bikes" video to the training set and a dozen or so to the test set and say "we've fixed it!"?
If this is the only data point, then Uber's self-driving cars are about 50 times more dangerous than average human drivers (see the numbers quoted repeatedly elsewhere: Uber has driven about 2 megamiles; the human average is about 100 megamiles between fatalities).
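The arithmetic behind the 50x, using the figures quoted in the thread (with the obvious caveat that a single fatality is a terrible basis for a rate estimate):

```python
# Back-of-envelope only: one event, so this is a point estimate, not a confident rate.
uber_miles_per_fatality = 2_000_000      # ~2 megamiles driven, 1 fatality
human_miles_per_fatality = 100_000_000   # ~100 megamiles per fatality, US average
print(human_miles_per_fatality / uber_miles_per_fatality)  # 50.0
```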
If that's your idea of adequate, you'd be safer just vowing to get drunk every time you drive from now on, since a modest BAC increases accident rates, but not by a factor of FIFTY!
I really don't bundle Tesla in with Waymo, Lyft, Toyota, and Uber, which are trying to build ground-up self-driving cars. Is Tesla actively testing self-driving cars on public roads yet? Are their included sensors even up to the task? I didn't think they had LiDAR.
True, but this seems to be a simple case of reacting to a person who steps in front of the car. Automatic emergency braking exists even on cars that aren't considered "self-driving".
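Right - even bolt-on AEB systems are basically just a time-to-collision check. A rough sketch of the decision rule (values are illustrative, not any manufacturer's calibration):

```python
# Rough sketch of how off-the-shelf AEB decides to brake: if time-to-collision
# with a tracked object drops below a threshold, apply the brakes.
# The threshold and inputs here are assumptions for illustration only.
def should_emergency_brake(range_m: float, closing_speed_mps: float,
                           ttc_threshold_s: float = 1.5) -> bool:
    if closing_speed_mps <= 0:
        return False            # not closing on the object
    time_to_collision = range_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

print(should_emergency_brake(range_m=20.0, closing_speed_mps=17.0))  # True
```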
It’s that last possibility that’s horrifying above all others. The backlash either way is going to be terrible, but if these cars are just not up to the task at all, and have driven millions of miles on public roads... people will lose their minds. Self-driving tech will be banned for a very long time, public trust will go with it, and I can’t imagine when it would return.
This is going to sound bad, but I hope this is just Uber’s usual criminal incompetence and dishonesty, and not a broader problem with the technology. Of the possible outcomes, that would be the least awful. If it’s just Uber moving fast and killing someone, they’re done (no loss there), but the underlying technology has a future in our lifetimes. If not...
Waymo actively tests edge cases like this, both in their desert test environment and via simulation; they have teams dedicated to coming up with weird edge situations like this (a pushed bicycle) where the system doesn't respond appropriately, so that it can be improved. All of these situations are kept and built up into a suite of regression tests. https://www.theatlantic.com/technology/archive/2017/08/insid...