
Some of the comments in this thread are ridiculous, and your comment throws them into sharp relief. Even if the video is a perfectly faithful reproduction of the scene and the dark parts are that dark, why in the world would the software assume a space it has next to no information about is safe to drive into at the current speed?


> a space it has next to no information about

The car is equipped with more than just an RGB camera; among other sensors it carries a Velodyne LiDAR, so it does have information about the scene even in areas that are pitch black to the naked eye.

http://www.businessinsider.com/uber-custom-lidar-tech-not-re...

http://velodynelidar.com/hdl-64e.html

https://techcrunch.com/2016/04/11/ford-lidar-autonomous-car/
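The reason darkness doesn't matter to LiDAR is that it supplies its own illumination: it times its own laser pulse rather than relying on ambient light. A minimal sketch of that time-of-flight principle (the numbers here are illustrative, not from the Velodyne unit):

```python
# Sketch of LiDAR time-of-flight ranging. The sensor emits its own laser
# pulse, so ambient darkness doesn't affect the measurement.
C = 299_792_458.0  # speed of light in m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range from pulse round-trip time: the pulse travels out and back,
    so the one-way distance is half of c * t."""
    return C * round_trip_s / 2.0

# A return detected ~200 ns after emission corresponds to roughly 30 m,
# well within the detection range claimed for units like the HDL-64E.
print(round(tof_range_m(200e-9), 1))
```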


> even if

That's why I specified. In any case, thanks for clarifying.


Yeah, my point is that the RGB video could be a perfectly faithful reproduction of the scene* and the software could still have reached its decision from multi-sensor data that hasn't yet been released (i.e., a dark spot in some frames from one of the RGB cameras doesn't imply that the autonomous system lacked information).

*Though we can assume this dashcam footage is lower in quality and resolution, and less precise, than what the sensors had access to.



