
It is so strange that people don't see what is happening at Tesla. They are driving billions of miles under AI control.

It is totally fine to dislike Musk or to think that Autopilot is unsafe today, but how can you look at billions of miles and say that it isn't going to work within the next few years?



There is a long long tail of edge cases that kill people.

When drivers go a few trillion miles per year in the US, any anomaly in your driving software is going to be measured in deaths per hour. There is a built-in human sympathy for accidents caused by people which will not be at all present with accidents caused by machines. You can't sue a carmaker for a wrongful death caused by a faulty human driver, but you can easily sue a car company.
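To make the "deaths per hour" point concrete, here is a rough back-of-envelope sketch. All figures are assumptions for illustration (roughly "a few trillion" US miles per year, and a hypothetical per-mile fatal-defect rate), not measured data about any real system.

```python
# Back-of-envelope: how often a rare software defect would kill people
# at US national driving scale. All constants here are assumptions
# chosen only to illustrate the scale argument.

US_MILES_PER_YEAR = 3e12   # assumed: "a few trillion" miles per year
HOURS_PER_YEAR = 365 * 24  # 8760

def deaths_per_hour(fatal_defect_miles: float) -> float:
    """Expected fatalities per hour if the fleet averages one fatal
    software failure every `fatal_defect_miles` miles driven."""
    deaths_per_year = US_MILES_PER_YEAR / fatal_defect_miles
    return deaths_per_year / HOURS_PER_YEAR

# Even a defect that strikes only once per 100 million miles
# (on the order of the current human fatality rate) yields:
print(round(deaths_per_hour(1e8), 1))  # ~3.4 deaths per hour
```

So even matching human-level reliability leaves a steady drumbeat of machine-caused deaths at fleet scale, which is what makes the liability and public-perception problem so different in kind.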

It will just take one cute kid getting killed by an errant robot car to end the dream completely. Machines don't make human mistakes; they make mistakes that are alien and frightening, things you couldn't ever imagine a person doing.

The public will get scared and automatic driving will be banned.

For Tesla and everyone else doing unattended driving, it isn't a matter of covering the last few edge cases left; it's a matter of increasing reliability by several orders of magnitude, and ultimately, I'm guessing robot cars won't be a reality until their intelligence starts to reach the point where people are asking questions about sentience.


That's easy: recording billions of miles of data is only part of solving the problem.

I'm sure they'll get there eventually, but having a ton of data isn't a cheat code that makes rapid progress inevitable.



