
I would be more inclined to agree with the California or Nevada approach, i.e. extensive testing and stringent reporting. Only in that manner will the safety of driverless cars approach that of airlines, which is what will be required to avoid a massive backlash the first time one of these cars kills someone.


I don't see any backlash at all for the ~90 people who were killed today in the US in motor vehicles [1]. I doubt there will be a massive backlash unless the vehicles were operating in a known dangerous setup. These vehicles are filled with sensors, so accident reconstruction will be trivial.

1. https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_i...


We've become used to such accidents, and usually blame them on the driver.

In the event of a driverless car killing someone, the manufacturer will have to shoulder the blame; and Google/Uber/etc. are much more attractive targets for lawsuits than regular drivers are.


Then have your self-driving vehicles owned by a trust or corporate entity with no assets.


When automated cars become a consumer good I would certainly agree with the more stringent approach, but I think light regulation is fine until that happens, and Google et al will be pushing for stringency themselves when the time comes. The thing that makes the FAA such a model of effectiveness compared to other regulatory bodies is that its interests are aligned with those of the industry as a whole. When a plane (or driverless car) crashes, everybody in the industry suffers a loss of business. Everybody would like to slack off a bit on safety themselves while their competitors are held to the highest standards, of course, but they'd mostly all rather have high standards as a general rule, so what regulatory capture does occur makes the FAA more focused on safety. All because consumers have alternatives to flying and are intrinsically more scared of flying than of other means of transportation.

And it will be the same with automated cars. People are intrinsically predisposed not to trust automated cars, and people will still be perfectly capable of driving cars themselves. In order for automated cars to take off, it's important that you don't have Bob's Drive-By-Night autocars crashing and killing lots of people, and the Googles and Ubers of the world know that.


I'm inclined to agree that extensive testing is desirable in this case. However, the only way I think we're going to get a sufficient quantity of testing under representative conditions is to get a reasonably large number of these cars out on the roads.

We don't need to worry too much about their use in a non-testing context, I feel. When we look at companies actively pushing this stuff out, they're either doing so in a highly controlled environment (transportation on mining sites, for instance) or pushing it out in a highly limited form (self-parking cars and lane-keeping features, for instance). Entirely automated driving is still experimental, the number of people who have access to it is very limited, and no one is selling their current prototypes to the general public.



