
What sparks your skepticism of self-driving cars? Some companies use stereo vision to generate point clouds, while others use lidar.


A lot of things. One is the "AI", which isn't so much "I": it is quite error-prone and hard or even impossible to analyze in detail or debug. The idea that bad actors (be they trolls, criminals, or spooks) could deliberately force malfunctions or misclassifications in AIs and thus cause crashes is off-putting, on top of the "normal" errors you can expect anyway.

Then there are the business/political aspects, like Tesla demanding that somebody who bought a used car pay again for Autopilot.

We have already seen crashes caused by Autopilot users paying no attention whatsoever (granted, AP isn't fully "self-driving", but still).

On top of that, just as with earlier car-safety improvements and even with the introduction of seat-belt laws, we saw an uptick in accidents that affected people outside the car the most, such as pedestrians and cyclists. Since I am a pedestrian quite often, I particularly dread semi-self-driving/assisted-driving tech like Autopilot, and I remain skeptical when people tell me that (almost) perfect fully self-driving cars are just around the corner. If my skepticism turns out to be unwarranted, great.

And this tech will keep many consumer cars around longer, to the detriment of public transportation. The one good-ish thing that came out of SARS-CoV-2 is the reduction in air pollution (I am not saying the pandemic is a net positive because of that, far from it). The air smells noticeably nicer around here, and the noise is down too.


> The idea that bad people (be it trolls, criminals or spooks) could force deliberate malfunctioning of/misclassifications in AIs and thus cause crashes

I wish people would stop trotting this one out. Bad actors can deliberately cause humans to crash just as easily, if not more so. If they don't, it's only because such behavior is punished.


Making somebody crash in a dumb car is pretty hard if you want to do it undetectably and with little to no risk to yourself or anybody else.

Glitching an AI, on the other hand, e.g. by holding up a sign, is less risky for you and harder to detect.
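(To make the "glitching" point concrete: this kind of attack is what the research literature calls an adversarial example. Here is a minimal, hypothetical sketch using a toy linear classifier and the fast-gradient-sign idea; real perception stacks are deep networks and real attacks use printed patches, but the principle, a small structured input change flipping the predicted class, is the same.)

```python
import numpy as np

# Toy linear "classifier": score = w.x + b, predicted class = sign(score).
# The weights are random stand-ins, not from any real driving system.
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.1

def predict(x):
    return 1 if w @ x + b > 0 else -1

# Start from an input the model classifies as +1 (say, "clear road").
x = rng.normal(size=64)
if predict(x) != 1:
    x = -x  # ensure we begin in the +1 class

# For a linear model, the gradient of the score w.r.t. the input is just w.
# FGSM-style perturbation: nudge each "pixel" by eps against the gradient.
eps = 1.0
x_adv = x - eps * np.sign(w)

print(predict(x))      # original prediction: 1
print(predict(x_adv))  # flipped by the perturbation: -1
```

The perturbation changes every component by at most `eps`, yet shifts the score by `eps * sum(|w|)`, which dwarfs the original margin. That is the structural weakness being pointed at: the attacker needs no physical contact with the car, only an input crafted against the model.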


> Making somebody crash in a dumb car is pretty hard

That's not true, even allowing for your added constraints, one of which I find quite absurd.

In the advanced technological case, you have https://www.theverge.com/2015/7/21/9009213/chrysler-uconnect...

In the non-advanced technological case, you could drop caltrops behind your vehicle as you drive, and no one would know it was you.

"But that only happens in cartoons" - Yes, because most people are not cartoon villains. And yet, look, kids throwing rocks, no AI necessary: https://en.wikipedia.org/wiki/2017_Interstate_75_rock-throwi...

> with minor to no risk to ... anybody else

Ah, yes, the ethical murderer who wants to fuck up just that one car but sincerely worries about the other drivers on the road. That's the demographic you're concerned about? And how does indiscriminately trying to trick generally available systems target only one person without risking other drivers?


If you're interested in replying in a condescending manner and attacking strawman arguments I never made, be my guest, but I have no desire to discuss this with you further.


> I have no desire to further discuss this with you

I'll just talk to myself then, because, while I understand your feeling hurt by my comment, I did not attack a strawman.

> Making somebody crash in a dumb car is pretty hard...

Not true. (I gave examples.)

> ...if you want to do it in an undetectable manner...

Still not true. (Same examples.)

> ...with minor to no risk to yourself...

Still not true. (Same examples.)

> ...or anybody else.

Still not true. (This is absurd. Also the same examples still apply.)



