
Tesla can only really gather depth data from their radar, while the cameras run a DNN to detect features; steering and speed adjustments come from fusing those two streams together. An error in the radar, or the DNN misdetecting features because of lighting, a change in road color, or an object on the road the model wasn't trained on, can cause problems like this.
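
For intuition, here's a toy inverse-variance fusion of a radar range with a camera-derived depth estimate in Python. The function name and noise figures are made up for illustration; this isn't Tesla's actual pipeline, just the general idea of weighting two noisy estimates of the same distance by how much you trust each sensor.

    # Toy sensor fusion: combine a radar range with a camera-derived
    # depth estimate via inverse-variance weighting. All noise values
    # here are illustrative assumptions, not anything from Tesla.

    def fuse_depth(radar_range_m, radar_var, cam_range_m, cam_var):
        """Combine two noisy depth estimates of the same object.

        Each estimate is weighted by the inverse of its variance, so
        the more trustworthy sensor dominates the fused result.
        """
        w_radar = 1.0 / radar_var
        w_cam = 1.0 / cam_var
        fused = (w_radar * radar_range_m + w_cam * cam_range_m) / (w_radar + w_cam)
        fused_var = 1.0 / (w_radar + w_cam)
        return fused, fused_var

    # Example: radar is precise in range; the camera estimate is noisier.
    depth, var = fuse_depth(radar_range_m=42.0, radar_var=0.25,
                            cam_range_m=45.0, cam_var=4.0)
    print(f"fused depth ~ {depth:.1f} m (variance {var:.2f})")

Note how a bad input from either side skews the result: if the DNN misjudges the camera distance badly but reports high confidence (low variance), the fused output is pulled along with it, which is one way the failure mode described above can propagate.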



"Tesla can only really gather depth data from there radars" - they can also do it visually through depth from motion


If it came back after an update, wouldn't the issue be the algorithm having been retrained incorrectly?



