Some of the comments in this thread are ridiculous, and your comment throws them into sharp relief. Even if the video is a perfectly faithful reproduction of the scene and the dark parts are that dark, why in the world would the software assume a space it has next to no information about is safe to drive into at the current speed?
Sensor-wise, the car is equipped with more than just an RGB camera (e.g., a Velodyne LiDAR), so it does have information about the scene even where it looks pitch black to the naked eye.
Yeah, my point is that the RGB video could be a perfectly faithful reproduction of the scene* and the software could still have reached its decision from multi-sensor data that hasn't yet been released (i.e., a dark spot in some frames from one of the RGB cameras doesn't imply that the autonomous system lacked information).
*Though we can assume this dashcam footage is lower quality / lower resolution / less precise than what the sensors themselves had access to.