
The cost discussion on LIDAR always confused a layman like me. How much more expensive is it that it seemed like such a splurge? LIDAR seems to be the only thing that could make sense to me. The fact that Tesla does it with only cameras (please correct my understanding if I'm wrong) never made sense to me. The benefits of LIDAR seem huge, and I'd assume it would just become more cost-effective over time as demand for the tech grew.

I'm _way_ out of my depth though.



> How much more expensive is it that it seemed like such a splurge?

When Tesla decided against LiDAR, units ran about $75k each. Currently they are around $9,300 per car, with some promising innovations around solid-state LiDAR that could push the per-unit cost down to hundreds of dollars.

Tesla went consumer-first, so at the time a LiDAR-equipped car would likely have cost $200k+, which makes it easy to see why they didn't integrate it. I believe their idea was to kick off a flywheel effect with training data.


Holy - okay never mind, I didn't realize just how expensive LiDAR was...


Lidar will continue to get cheaper, but it has fundamental features that limit how cheap it can get, features that passive vision does not have.

You’re sending your own illumination energy into the environment. This has to be large enough that you can detect the small fraction of it that is reflected back at your sensor, while not being hazardous to anything it hits, notably eyeballs, but also other lidar sensors and cameras around you. To see far down the road, you have to put out quite a lot of energy.
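As a back-of-the-envelope illustration of why range costs transmit energy, here is a simplified link budget for a diffuse (Lambertian) target. The formula and every parameter value below are my own illustrative assumptions, not any specific sensor's numbers:

```python
import math

def lidar_return_power(p_tx_w, range_m, reflectivity=0.1,
                       aperture_diam_m=0.025, efficiency=0.5):
    """Rough received power for a diffuse target:
    P_rx ~= P_tx * rho * A_rx / (pi * R^2) * eta
    (all values here are illustrative guesses)."""
    a_rx = math.pi * (aperture_diam_m / 2.0) ** 2
    return p_tx_w * reflectivity * a_rx / (math.pi * range_m ** 2) * efficiency

# Doubling the range quarters the return signal, so seeing 4x as far
# down the road costs roughly 16x the signal, hence the eye-safety bind.
ratio = lidar_return_power(1.0, 50.0) / lidar_return_power(1.0, 200.0)
# ratio is 16 (up to float rounding)
```

The inverse-square fall-off is the point: pushing detection range out while staying eye-safe is a hard physical trade, not just a cost-curve problem.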

Also, lidar data is not magic: it has its own issues and techniques to master. Since you need vision as well, you have at least two long range sensor technologies to get your head around. Plus the very real issue of how to handle their apparent disagreements.

The evidence from human drivers is that you don’t absolutely need an active illumination sensor to be as good as a human.

The decision to skip LiDAR is about managing complexity as well as cost, both of which reduce the risk of getting to market.

That’s the argument. I don’t know who is right. Waymo has fielded taxis, while Tesla is driving more but easier autonomous miles.

The acid test: I don’t use the partial autonomy in my Tesla today.


Does the "sensor fusion" argument that Tesla made against LiDAR make as much sense now that everyone is basically just plugging all the sensor data into a large NN model?


It's still a problem conceptually, but in practice, now that it's end-to-end ML plug'n'pray, I guess it's an empirical question. Which gives one the willies a bit.

It'll always be a challenge to get ground truth training data from the real world, since you can't know for sure what was really out there causing the disagreeing sensor readings. Synthetic data addresses this, but requires good error models for both modalities.
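A toy sketch of what "good error models for both modalities" might mean in synthetic data, where ground truth is known by construction. All distributions and numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy synthetic ground truth: the true distance to one obstacle.
true_range_m = 30.0

def camera_depth_model(r):
    # Monocular depth error tends to grow with range (made-up 5% sigma).
    return r + rng.normal(0.0, 0.05 * r)

def lidar_model(r):
    # Lidar range error is roughly constant, but returns can drop out
    # entirely (made-up 2% dropout, 3 cm sigma).
    return float("nan") if rng.random() < 0.02 else r + rng.normal(0.0, 0.03)

# Because the simulator knows the truth, a fusion model can be trained on
# (camera_reading, lidar_reading) -> true_range pairs, including the cases
# where the two modalities disagree -- exactly what's unknowable in the field.
cam_reading = camera_depth_model(true_range_m)
lidar_reading = lidar_model(true_range_m)
```

The catch the comment raises is that this only helps to the extent the made-up error models above actually resemble the real sensors.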

On the latter, an interesting approach that has been explored a little is to SOAK your synthetic sensor training data in noise so that the details you get wrong in your sensor model are washed out by the grunge you impose, and only the deep regularities shine through. Avoids overfitting to the sim. This is Jakobi's 'Radical Envelope of Noise Hypothesis' [1], a lovely idea since it means you might be able to write a cheap and cheerful sim that does better than a 'good' one. Always enjoyed that.

[1] https://www.sussex.ac.uk/informatics/cogslib/reports/csrp/cs...
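A rough sketch of the soaking idea applied to a simulated lidar scan. The noise types and all parameter values are my own guesses at what such an envelope might include, not Jakobi's recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def radical_noise(sim_ranges_m, dropout_p=0.2, sigma_m=0.5, outlier_p=0.05):
    """Hammer a clean simulated scan with jitter, dropouts, and spurious
    returns, so a model trained on it can't latch onto the sim's fine
    details. (Illustrative parameters, not a tuned envelope.)"""
    r = np.asarray(sim_ranges_m, dtype=float).copy()
    r += rng.normal(0.0, sigma_m, r.shape)            # range jitter
    drop = rng.random(r.shape) < dropout_p            # missing returns
    r[drop] = np.nan
    wild = rng.random(r.shape) < outlier_p            # spurious returns
    r[wild] = rng.uniform(0.5, 100.0, wild.sum())
    return r

clean = np.full(64, 20.0)      # a flat wall 20 m away, per the sim
noisy = radical_noise(clean)   # what the model actually trains on
```

The appeal, as the comment says, is that the noise only has to be broad enough to swamp the sim's inaccuracies; the sim itself can stay cheap and cheerful.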


> now that it's end to end ML, plug'n'pray, I guess it's an empirical question

Aren't human drivers the same empirical question?

That paper is really interesting, thanks!



