
> so where is your proposal to limit the passengers of a self driving car

That limit exists to reduce distractions for student drivers; it's a mitigation for a problem specific to them. FSD needs mitigations of its own.

> where is your proposal for compromise?

I'd like to see safety data reporting requirements that come from regulators, not Tesla, whose self-reported cherry-picked data points I find quite suspect. (Example of this issue: https://www.latimes.com/business/story/2022-12-27/tesla-stop...)

I'd like to see safety-critical beta software in cars undergo independent audits prior to widespread release. (My dream would be for it to be open source, but that's probably unrealistic.)

I'd like to see formalized safety testing processes of such software at the regulatory level, similar to how crash testing is currently conducted.

I'm sure others have specific, useful suggestions.

> there are none because nobody who likes pouring cold water on tesla is coming from a place of intellectual honesty.

This, ironically, doesn't sound like it comes from a place of intellectual honesty.

> how about for the countless lives that will be saved by this technology?

I certainly hope that happens someday.




These compromises are productive and show that you do come from a place of intellectual honesty.

If the federal government had been tasked with overseeing the early versions of FSD, it would have been swiftly shut down because of the nature of the federal government, not to mention the politics. But thanks to the private sector, we now have modern FSD, which is bar none the most advanced and capable self-driving solution in the world. Now that self-driving has gotten this far, it's probably much less likely to be aborted if subjected to government intervention and oversight. In light of the huge benefits that self-driving cars stand to create, measured in human lives, compromise is the only rational proposal. Shutting down FSD, as mouth-breathing internet commenters suggest, would be objectively wrong given the state of its competitors and the nature of the problem.

Edit: your bio says 'fuck elon musk.' Making a two-dimensional character out of Elon Musk isn't a good way to understand him or his projects. When the time comes and Elon Musk uses his influence and money to do something really bad, it may be a case of the boy who cried wolf, thanks to your camp.


> your bio says 'fuck elon musk.'

My Twitter bio does. Added shortly after he banned links to Mastodon, broke Tweetbot (and lied about them breaking the rules), and announced, with a few days' warning, breaking changes to the Twitter APIs I use extensively at work.


You seem passionate and knowledgeable about this, so I'll ask you: I want to know more about the API changes. Detractors say it was, at best, an irresponsible change to the API that inconvenienced companies using it. Proponents say Musk simply stopped making the API free, which was always unsustainable, and people should have known better. What was really going on?


Entire businesses had their products cease to function, with no warning, and no explanation from Twitter, until a couple of days later they got vaguely libeled by Twitter's developer account. (https://twitter.com/TwitterDev/status/1615405842735714304)

They then announced the free API would go away entirely with a week's notice and pricing details "next week". (https://twitter.com/TwitterDev/status/1621026986784337922) That change got delayed several times.

Absolute clown show. If they'd said "in 90 days we're shutting down third-party clients and implementing a paid tier", people would've grumbled but seen it as fairly reasonable. Kneecapping devs who've been building Twitter apps and integrations for a decade was cruel and unnecessary.


I 100% agree with you that we should have regulators auditing and verifying safety information for autonomous systems.

But I'd like to point out that the link you included is out of date. Tesla has continued to publish their Autopilot safety numbers in their quarterly slide decks. Here is the Q4 2022 deck, for example; see page 10: https://tesla-cdn.thron.com/static/SVCPTV_2022_Q4_Quarterly_...

Miles between accidents on Autopilot:

Q4 2021: 4.3 million miles
Q1 2022: 6.5 million miles
Q2 2022: 5.1 million miles
Q4 2022: 6.2 million miles


That’s the same old sketchy number they like to tout.

Autopilot can only be used in safer conditions, and if the car goes “whoops I’m out, take over” shortly before an accident that doesn’t count in that stat either.


It’s not an old number, it’s a new number reported every quarter.

But you’re right that comparing largely highway miles vs. all miles isn’t completely fair. FSD, on the other hand, can be activated and used in most scenarios and has 3.2 million miles between accidents vs. the US average of 500,000 miles, roughly a 6x difference. So still quite a bit safer, but less so than Autopilot.
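
To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python using only the figures quoted in this thread (the Autopilot value is the Q4 2022 number from the deck above). It's just the ratio math, not a like-for-like safety comparison, for the road-mix reasons already discussed.

    # Miles between accidents, as quoted in this thread (Tesla-reported figures)
    fsd = 3_200_000        # FSD
    autopilot = 6_200_000  # Autopilot, Q4 2022
    us_average = 500_000   # US average cited above

    print(f"FSD vs US average:       {fsd / us_average:.1f}x")        # ~6.4x
    print(f"Autopilot vs US average: {autopilot / us_average:.1f}x")  # ~12.4x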

As for Autopilot deactivating right before an accident: if Autopilot was active within 5 seconds of the accident, it is still attributed to Autopilot, not the human driver.


> It’s not an old number, it’s a new number reported every quarter.

"Old number" here means "the same old stat they trot out every time". The value gets updated; the concerns over its being a cherry-picked apples-to-oranges comparison remain.

FSD still nopes out in the most challenging circumstances, which are the circumstances where accidents are far more likely to happen. It's like a surgeon bragging about their low complication rate; if they run out of the OR screaming when something unexpected happens and their colleague has to take over, it's not a super useful stat.



