
The comments are a pile-on of criticism and "how can this be legal?" The question is what percentage of self-driving hours lead to a situation like this: is it high enough to conclude it's unsafe compared to driving without Autopilot, or is this a statistical outlier?


Someone on here[1] already compiled a list of 7 incidents in this video of a ~30 minute drive. 3 of these look like they could have resulted in a severe crash or even a fatality if the driver had not intervened (the train, proceeding into an intersection on a solid red light, ignoring pedestrians). That's a pretty bad batting average - most people don't nearly crash 3 times in half an hour of driving.

[1] https://news.ycombinator.com/item?id=31977352
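
To put that "batting average" in numbers, here is a minimal sketch that turns the incident counts from the video into per-hour rates and compares them against a hypothetical human baseline. The baseline figure (one near-crash per 100 driving hours) is an assumption chosen purely for illustration, not a measured statistic.

    # Rough rate comparison based on the incidents listed in [1].
    # The human baseline below is a hypothetical, illustrative number,
    # not a measured statistic.

    video_hours = 0.5            # ~30 minute drive
    critical_interventions = 3   # train, red light, pedestrians
    all_incidents = 7

    fsd_critical_rate = critical_interventions / video_hours   # per driving hour
    fsd_incident_rate = all_incidents / video_hours

    human_near_crash_rate = 1 / 100.0   # ASSUMED: one near-crash per 100 hours

    print(f"Critical interventions/hour in the video: {fsd_critical_rate:.1f}")
    print(f"All incidents/hour in the video:          {fsd_incident_rate:.1f}")
    print(f"Assumed human near-crashes/hour:          {human_near_crash_rate:.3f}")
    print(f"Ratio (critical vs. assumed human): "
          f"{fsd_critical_rate / human_near_crash_rate:.0f}x")

Even with a far more generous baseline the gap implied by this one drive is large; the open question upthread is whether this drive is representative.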


If I understand you correctly, you're arguing that machines making mistakes may be OK if those mistakes are statistically less likely than human ones?


I'm asking how common it is. If I understand you correctly, you think self-driving should be banned if there's a single error?


It should be banned as long as there's no way to know ahead of time that it is trustworthy within its intended operating envelope. Said operating envelope should be clearly and unambiguously explicable to a layperson.

Neither of those conditions holds. I'd appreciate it if people would stop trying to use statistics to give a free pass to an unproven system being tested with a fundamentally unsafe methodology and minimal experimental controls.


No, I was just trying to work out what you were saying.

I do, however, think that it's wrong. Why? Because human error is already enough to deal with; now couple it with AI error. It makes society, crash investigation, and road design far more complicated.

Maybe if we banned human drivers entirely and optimized roads for autonomous cars, it would be a good idea. I just can't see these cars not needing supervision for a very long time.

I doubt people are going to buy cars that might kill them by making "mistakes". We'll see, but I think it's going to be a hard sell. People can easily accept their own mistakes. A computer's, though?


We actually accept that in many industries where such errors result in loss of human life: entire plane fleets get grounded, and so on.


They waited until two had crashed before grounding the 737 MAX. That's two too many. They already knew.


"Something like this is already being done" is not much of an argument. People do a lot of stupid things.


There’s a built-in autoban feature: one head-on collision with a train activates it.

Unfortunately it sucks for all the innocent people on the train.


Just watch the video… he had to intervene several times in a single drive.


It doesn't matter how statistically safe it is. Putting unreliable software like that on the streets causes more crashes than human drivers do. The sudden swerving, the phantom braking, the absolutely batshit-insane behavior, the frankly dangerous "lidar is for scrubs, we're only going to use cameras" approach, and Autopilot turning off ever so slightly before a crash so that Tesla's logs can say it wasn't on: all of these are reasons it should be off the roads and Tesla fined hard.


It does matter how statistically safe it is.

It’s possible for it to be less than 100% reliable and still be safer than the many drivers who seem to be on their phones much of the time where I live.


It doesn't matter, because of how it behaves in that small percentage of cases where it acts like a blue-collar worker on a cocaine bender on a Friday night. It doesn't slam itself into a tree like a drunk driver in the middle of the night; it is so unpredictably bad that I half expect to see reports telling me it tried to take a train line because it was a shortcut.


Humans are the same; I'm still not sure why total safety statistics wouldn't matter…





