
Scary... mistakes/bugs from the software developers behind these robotic cars aren't just going to bring down a business application (lose money), they're going to kill customers and innocent drivers.

Progress to the point where it's safer is going to be a killer, and we the drivers on the road are unwilling guinea pigs for billionaires' dreams/goals.


Willing guinea pig here. I'm going to do my best not to die, but I'm excited about this tech and willing to deal with the drawbacks of being an early adopter.


It's great that you're willing to die for Tesla's profit, but do you realize that autonomous-vehicle-induced crashes affect everyone on the road? Even someone who doesn't consent to Tesla's TOS is still sharing the road with potentially dangerous software.

Granted, individual drivers are awful enough that it probably doesn't make that huge a difference in danger. But would you still feel that way if a family member was killed in an accident where their human-controlled car was rear-ended by a Tesla?


I don't think he said he's willing to die. He said he's willing to test it, and presumably he'll continue concentrating and will manually take over if the car does something dangerous.


He came fairly close, honestly. "I'm going to do my best not to die, but I'm excited about this tech" is recognizing, and accepting, that death is a possibility as a result of Tesla's self-driving process.


There's risk of dying every time I get behind the wheel of any car. There's no benefit to pretending real risk doesn't exist in any given scenario. I'm confident that I can be attentive and cautious enough using this tech to keep the risk similar to what it would be just driving normally.


That's assuming he's always able to intervene before it kills him. The argument is that he may not always be able to, or may not be able to prevent AP from behaving erratically enough to kill someone else (even as he saves his own life).


Everyone puts their lives in the hands of machines daily: brakes, automated elevators, medical devices. Driving is an inherently dangerous activity; computer or human at the controls won't change that. If we delay computers getting better than humans, we just keep the status quo, which is 30k or more road deaths yearly in the US.


Elevators and medical devices go through extensive testing and certification before they are ever put into service. And when they are updated, they go through extensive testing and certification again.

Teslas, on the other hand, apparently change their handling and driving profile overnight at the whim of the software engineers at Tesla, without even telling the drivers, introducing bugs like the one in the OP that are liable to get someone killed.

They are not the same, and comparing them only highlights the issues that Tesla has around their OTA update practice.


Insane. Commits into production like this need to be government regulated and scrutinized.


> apparently change their handling and driving profile overnight at the whim of the software engineers at Tesla, without even telling the drivers

OTA updates only happen after they are confirmed by the user. Where did you hear that they happen without user intervention?


According to the linked Reddit poster: "Tesla's only release notes for this release were DOG MODE and SENTRY MODE. They don't tell you there is a massive change to AP and to reset your expectations."


What part of this behavior do you think is covered by:

* Improved DOG MODE

* Improved SENTRY MODE

which were the release notes for the update?


You're right. The owner/driver has to approve the update, if I recall. But putting the responsibility to review and understand release notes on the car's owner seems kind of absurd. And that assumes accurate and descriptive release notes, which was most certainly not the case in the described instance. In any case, clicking "update" is such a rote behavior for users on computers, phones, and now their Teslas that I'd argue it's effectively no different from an update happening without notice or user intervention.

There's only so much you can learn from even the best release notes, period. The ever-so-common "bug fixes," for example, is so broad that it effectively means nothing at all. At best, it's telling the end user "this little update just changes some stuff hidden under the hood. You won't notice anything, so don't give it any thought."


Disclosure seems like a red herring, if there is no real choice other than to accept the update. If I get an update to my car that says "this may cause your car to explode at random times", and I don't want to scrap it, the only thing I can do is look around and see if other people are ignoring the warning, and then rationalize that it won't happen to me.

You can't ever look at consent outside of the context of the best available alternative to agreeing to something.


But they agreed to the terms of service, what is everyone complaining about?


On the other hand, if enough people die because a company rushed self-driving to market before it's ready, there's a very real chance of knee-jerk regulation setting the technology back even further.


> But would you still feel that way if a family member was killed in an accident where their human-controlled car was rear-ended by a Tesla?

I don't have a dog in this fight, but appeals to emotion in order to drive irrational thinking do not make for constructive debate.


> appeals to emotion in order to drive irrational thinking do not make for constructive debate.

In a perfect world, sure. Real world, you will never have an inherently emotional situation (road safety) where the only voices heard are those of completely detached individuals.

As humans, we have to figure out ways to connect with people in those situations and empathize with what they're feeling. Simply dismissing their concerns as driven by emotion isn't a winning strategy.


Understanding the emotional reactions of people to situations is important. "How would you feel if"-type statements do not do that. They do, and are often intended to, shut down conversation instead of foster it.


I disagree.

The gp said they're a "Willing guinea pig".

The parent pointed out it isn't just their lives on the line, but others, potentially including their family.

That isn't irrational at all. Or an appeal to emotion.


> we the drivers on the road are unwilling guinea pigs for billionaires' dreams/goals.

We'll adapt, i.e. adjust our behavior to account for that new factor. Police, for example, have already learned how to pretty safely stop a Tesla on Autopilot with a driver sleeping behind the wheel and not reacting to any signals (because he's dead drunk, for example).


I like comments where you can't tell if the author is defending something, or absolutely condemning it.


Right now 30,000-40,000 people die in car crashes annually: https://en.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in.... And those are just fatalities: hundreds of thousands more are injured. How do we deal with those presently?

We just accept that amateurs should be hauling around at high speeds in several thousand pounds of missile.

The most relevant question is whether Tesla AP is safer or less safe than typical amateur drivers per 1,000 vehicle miles driven. I don't know the answer to that question.
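
For concreteness, here's a minimal sketch of how that comparison would be computed, normalized to the per-100-million-mile convention used in US fatality statistics. The Autopilot inputs below are invented placeholders, precisely because the real answer isn't known:

  # Back-of-the-envelope rate comparison. All numbers are hypothetical
  # placeholders, NOT real Tesla or NHTSA data.
  # US fatality rates are usually quoted per 100 million vehicle miles
  # traveled (VMT), so normalize both fleets to that unit.

  def deaths_per_100m_vmt(deaths: int, miles: float) -> float:
      return deaths / miles * 100_000_000

  # Hypothetical inputs, for illustration only:
  human = deaths_per_100m_vmt(36_000, 3.2e12)  # roughly all US driving in a year
  ap = deaths_per_100m_vmt(5, 1.0e9)           # made-up Autopilot figures

  print(f"human driving: {human:.2f} deaths per 100M miles")  # 1.13
  print(f"autopilot:     {ap:.2f} deaths per 100M miles")     # 0.50

Even with real inputs, a naive comparison like this is confounded: Autopilot miles skew heavily toward highways, which already have a lower fatality rate per mile than city streets.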


Unwilling? Don't buy a Tesla. Don't believe the hype. It's that simple. Granted, there is nothing stopping any ECU from killing you, but I'd trust a company like Honda way before Tesla.


"Unwilling? Don't buy a Tesla."

That doesn't protect me from being killed by a Tesla. I am pretty neutral on the topic, but I am getting the feeling that they are in danger of pushing out half-baked stuff like we tend to do in software. For most software this is OK, but maybe not for things that are moving at high speed.


Reminder that it has been, and still is, the norm for the last half-century for 30k people to die in car accidents each year. Many more are injured and disabled.

Worse yet, texting while driving increases the risk of an accident 23x.


Tesla crashes can injure people who don't drive a Tesla themselves.


So can Honda crashes.


Honda isn't experimenting on the public at large with unproven technology. One of their suppliers did, and ended up bankrupt as a result.


I generally have zero interest in cars and don't follow the new models, but my impression from articles I've seen in recent years is that Volvo is actually a top contender in driver-assistance systems (when you don't fool yourself into thinking you have an autopilot, but you really want sensible safety-augmentation features).

Is that impression accurate?


Based on this article, GM's Super Cruise is the best when you take "keeping the driver engaged" as one of the criteria: https://www.consumerreports.org/autonomous-driving/cadillac-...

Volvo's tech is last among the ones compared.


I have no idea. Volvo certainly has the culture to do something good with it. But do they have the money and resources required? Today they are owned by China's Geely. I don't know what partnerships and capital they can work with to compete with the top contenders (who I assume have Silicon Valley capital behind them).


Geely has invested a lot into Volvo[1], and Volvo are innovating in interesting ways[2]. I would choose the electric Polestar 2 over a Tesla in the same price bracket due to Volvo's culture of safety. Hopefully the cheaper versions will be released soon.

1. https://www.bloomberg.com/news/features/2018-05-24/volvo-is-...

2. https://arstechnica.com/cars/2019/02/volvo-spinoff-polestar-...


Waymo and Cadillac Super Cruise are generally considered the market leaders.



