LIDAR also has limitations on angular resolution just as a function of how the sensor works. It's entirely possible that the size of the person/bike on LIDAR was just too small until it was too late to stop.
Why didn't it even appear to try to stop? You've got me. The refresh rate on the LIDAR? The LIDAR flat out being mounted too high, with the system relying on optical sensors instead for collision avoidance of small targets (like a human head)?
I'm guessing; I'd love to see an NTSB report on this.
Why even bother having a LIDAR system on your self driving car if it doesn't have sufficient resolution to detect a person standing right in front of it?
This doesn't seem like an edge case at all: a pedestrian crossing the road at a normal walking pace, with no obstructions that would block the car's view. The fact that it's dark out should be irrelevant to every sensor on that car other than the cameras.
Something obviously went terribly wrong here; either with the sensors themselves or the software. Probably both.
For detecting larger obstacles like buildings or other vehicles, would be my guess.
Realistically, faster sensors should be used to detect obstacles. The LIDARs I could find with some cursory googling run at up to 15 Hz. Computer vision systems can run much faster (I have a little JeVois camera that'll do eyeball tracking at 120 Hz onboard; I assume something that costs more can do better).
But more importantly, you're vastly trivializing the problem. Standing right in front of it, sure, the LIDAR will see the person no problem. Standing 110 feet away (which would be about the minimum stopping distance at that speed)? Realize that for a LIDAR with a 400' range running at 15 Hz on a car moving at 40 mph, a 2'-wide target at maximum range only spans ~7 samples of a sweep... For at least the first 3 frames that person is going to look like sensor noise. At 110 feet that person (which I'm calling a 2'-wide target) still only subtends about 1 degree of your sensor's sweep.
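To put rough numbers on that, here's a quick back-of-the-envelope sketch (a hypothetical illustration; the 0.04° horizontal resolution and 2' target width are the assumptions above, not measured Uber hardware specs):

    import math

    ANGULAR_RES_DEG = 0.04   # assumed horizontal resolution, degrees per sample
    TARGET_WIDTH_FT = 2.0    # assumed width of a pedestrian seen edge-on

    def samples_across(range_ft, width_ft=TARGET_WIDTH_FT, res_deg=ANGULAR_RES_DEG):
        """Horizontal samples a flat target spans in one sweep at a given range."""
        subtended_deg = math.degrees(2 * math.atan(width_ft / 2 / range_ft))
        return subtended_deg / res_deg

    for r_ft in (400, 200, 110):
        print(f"{r_ft:>3} ft: ~{samples_across(r_ft):.0f} samples wide")
    # 400 ft: ~7 samples wide
    # 200 ft: ~14 samples wide
    # 110 ft: ~26 samples wide (about 1 degree of the sweep)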
It's not that it's useless or broken; it's more that this is a seriously bad case where optical tracking couldn't work and where LIDAR is particularly ineffective at seeing the person because of how it works. More effective might be dedicated time-of-flight sensors in the front bumpers; I'm unsure how long a range those can get, but they are also relatively "slow" sensors.
It’s not mutually exclusive either. You can have a lower-frequency, lower-angular-resolution 360° spinning LIDAR for low-granularity general perception, and also much higher-frequency, brighter, lower-FOV (~90-120°) solid-state LIDAR mounted at the very least on the front corners of the car. We should be absolutely littering these vehicles with sensors; there’s no reason to be conservative at this stage.
> LIDAR also has limitations on angular resolution just as a function of how the sensor works. It's entirely possible that the size of the person/bike on LIDAR was just too small until it was too late to stop.
I highly doubt this is the issue. I'm not sure what Uber's setup is, but even a standard Velodyne should have been able to pick that up based on angular resolution.
> Realize that for a LIDAR with a 400' range running at 15 Hz on a car moving at 40 mph, a 2'-wide target at maximum range only spans ~7 samples of a sweep... For at least the first 3 frames that person is going to look like sensor noise. At 110 feet that person (which I'm calling a 2'-wide target) still only subtends about 1 degree of your sensor's sweep.
This is based on the Velodyne LIDAR specs I could find last night with some quick googling:
- 400' range
- 0.04 degree angular resolution
- 15 Hz max update rate
If you have real-world experience with these sensors and can share more accurate performance characteristics, I can update.
These calculations were done assuming a vehicle moving at 40 mph. The stopping distance at that speed is about 110 ft. I computed the pixel size by assuming 1 measurement = 1 pixel, giving me 9000 pixels per 360 degrees.
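For anyone wanting to check those two figures, a minimal sketch (the 0.7 g braking and 0.5 s system latency are my assumed values for illustration, nothing from Uber):

    MPH_TO_FPS = 5280 / 3600   # mph -> ft/s

    def stopping_distance_ft(speed_mph, brake_g=0.7, latency_s=0.5):
        """Latency (reaction) distance plus braking distance, in feet."""
        v = speed_mph * MPH_TO_FPS
        return v * latency_s + v ** 2 / (2 * brake_g * 32.2)

    print(stopping_distance_ft(40))   # ~106 ft, i.e. roughly the 110 ft above
    print(360 / 0.04)                 # 9000.0 samples per revolution at 0.04 deg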
That's the one LIDAR Uber seems to have, based on matching pictures.
5 Hz - 20 Hz full-round sampling rate; let's assume 15 Hz.
The resolution in the horizontal plane depends on rotational speed, so at 15 Hz it should be 0.26 degrees (0.35° at 20 Hz, scaled linearly: 0.35/20 × 15 = 0.26).
For the woman's height, the vertical angular resolution is 0.4 degrees no matter the rotation speed.
That is, she would have been at least one pixel wide from 400 feet (if we assume she's 2' wide) and about 2 pixels high, growing in size as the car approached (not counting the bike).
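A small sketch of that math, taking the datasheet figures quoted above (horizontal resolution scaling linearly with spin rate, fixed 0.4° vertical resolution) plus my own 2' × 5.5' target assumption:

    import math

    V_RES_DEG = 0.4   # vertical resolution, fixed by the laser layout

    def h_res_deg(spin_hz, ref_res_deg=0.35, ref_hz=20):
        """Horizontal resolution scales linearly with rotation rate."""
        return ref_res_deg * spin_hz / ref_hz

    def footprint_px(range_ft, width_ft=2.0, height_ft=5.5, spin_hz=15):
        w_deg = math.degrees(2 * math.atan(width_ft / 2 / range_ft))
        h_deg = math.degrees(math.atan(height_ft / range_ft))
        return w_deg / h_res_deg(spin_hz), h_deg / V_RES_DEG

    print(h_res_deg(15))       # 0.26 deg
    print(footprint_px(400))   # ~(1.1, 2.0): 1 pixel wide, 2 pixels high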
I really see no excuse for Uber messing this up that badly. The LIDAR can't have missed a potential "obstacle" once it got closer, even if the car wouldn't classify it as a human.
I was using Rev E because it's the datasheet I had handy. Mostly I was trying to point out that LIDAR is not some magic thing that always sees everything; there are limitations.
The numbers come out similar with your 0.26 angular resolution @ 15 Hz. (I just have a spreadsheet that spits all these out for me.)
These are NOT big targets; they could easily have been mistaken for noise and filtered out. All of the LIDAR data I've ever seen has been fairly noisy and required filtering to get usable information out of it. And given the number of frames they get, maybe their filtering was just too aggressive.
Yes, I agree with you that we can't assume the car could have noticed the woman from 120 meters on LIDAR data alone. Maybe with some kind of sensor fusion with IR cameras.
But as it got closer, and what the computer thought was noise stayed in about the same place, a sane obstacle finder should have given a positive match. Maybe at 30-40 m worst case?
At 142 feet the woman would probably have subtended (assuming she was 5.5' tall):
asind(5.5/142) = 2.21° => 2.21/0.4 = 5.5
So between 5 and 6 "scanlines" going from left to right over her.
Assuming she was 2' wide, that's 0.8 degrees, which would be 2 to 3 pixels in breadth according to your spreadsheet.
That's between 10 and 18 pixels (voxels?) that stand out clearly from the flat road around them, excluding the bike.
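Generalizing that arithmetic over range (same assumptions as above: 5.5' tall, 2' wide, 0.26° horizontal and 0.4° vertical resolution; my illustration, not anyone's actual pipeline):

    import math

    H_RES_DEG, V_RES_DEG = 0.26, 0.4   # horizontal @ 15 Hz, vertical (fixed)

    def returns_on_target(range_ft, width_ft=2.0, height_ft=5.5):
        """Approximate LIDAR returns on an upright pedestrian at a given range."""
        cols = math.degrees(2 * math.atan(width_ft / 2 / range_ft)) / H_RES_DEG
        rows = math.degrees(math.asin(height_ft / range_ft)) / V_RES_DEG
        return rows, cols

    for r_ft in (400, 250, 142, 110):
        rows, cols = returns_on_target(r_ft)
        print(f"{r_ft:>3} ft: {rows:.1f} rows x {cols:.1f} cols ~= {rows * cols:.0f} returns")
    # 142 ft: 5.5 rows x 3.1 cols ~= 17 returns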
If you want to get an idea of how LIDAR data looks, Velodyne has free samples and a viewer for its lower-resolution models.
It's pretty hard to identify obstacles far off, but you will still see that there is something there. It's especially easy to identify obstacles that are vertical.
As she got closer, she would eventually show up clearly in the LIDAR data. But since the car never slowed down or swerved left, it didn't notice her at all even at point blank range (or did see her but failed to do anything about it).
A buddy of mine has a lower-end LIDAR on a robot; I'm working with them on SLAM for it and trying to get a similar hardware setup locally over the summer. (I have weird hobbies.)
Yeah, I'm willing to accept SOMETHING bad happened here. As I said, I really just want to dissuade people from the notion that LIDARs will see all obstacles all of the time. I'm not going to say the car acted perfectly and it was sensor failure, but I'm definitely willing to say that the LIDAR probably COULD see her, just not as well as people would assume.
Really, I think this was a case of the car overdriving its effective sensor range, the same as what happens when you're on a dark road and a deer runs into the middle of it: you simply can't react fast enough by the time you realize the danger is there. Computers are fast, but they aren't perfect.
What I'd be particularly interested in is whether the computer saw her, did the calculation "I can't stop safely in this distance", and decided to just hit the obstacle because it was "safer". At that point we start getting into ethics, and this problem gets a lot murkier.
The person in that last picture is something like 5 feet from the car, which is far too close to be useful at 40 mph. At those speeds, what matters is what it sees at 150 and 200 feet and how fast it can refresh.
When your resolution is low enough not to see this, they stop calling it LIDAR and start calling it a rangefinder. If this was actually a fundamental limitation of the sensor, then that's the crappiest LIDAR unit I've ever heard of.

When I first heard about the accident, my initial reaction was "this is why vision-only systems are inadequate". The fact that it didn't detect the object at all before colliding with it, with LIDAR, radar, and vision on board, is inexcusable. This could set fully autonomous cars back far enough that, forget the cost of one life, the delay could cost tens of thousands of lives, all because of a preventable accident.