"test a system with our own lives" is actually an understatement; FSD Beta is being tested on other peoples' lives without their consent. Other drivers, pedestrians, etc.
so when you have a student driver, or a new driver, the lives of not just the driver but everyone around them are being risked just to test a system. so we should just stop all new drivers. no more new drivers. after all, a new driver, especially a teenage one, is orders of magnitude more dangerous than the current fsd.
Student drivers are typically subject to significant limitations - no other kids in car, no driving past certain hours, parental supervision for a certain period of time, etc. They're not learning for a billionaire's profit, and they're fairly unlikely to get a software update that causes a whole bunch of them to make the same mistake in a short period of time.
so where is your proposal to limit the passengers of a self-driving car? where are your proposals for all of the countless brands advertising level 2 systems? where is your proposal for compromise? there are none, because nobody who likes pouring cold water on tesla is coming from a place of intellectual honesty.
so self driving cars could only exist for a billionaire's profit? how about for the countless lives that will be saved by this technology? tens of thousands of people die on the roads every year. cmon dude.
I'd like to see safety-critical beta software in cars undergo independent audits prior to widespread release. (My dream would be for it to be open source, but that's probably unrealistic.)
I'd like to see formalized safety testing processes of such software at the regulatory level, similar to how crash testing is currently conducted.
I'm sure others have specific, useful suggestions.
> there are none because nobody who likes pouring cold water on tesla is coming from a place of intellectual honesty.
This, ironically, doesn't sound like it comes from a place of intellectual honesty.
> how about for the countless lives that will be saved by this technology?
these compromises are productive and show that you do come from a place of intellectual honesty.
if the federal government had been tasked with overseeing the early versions of fsd, it would have been swiftly shut down because of the nature of the federal government, not to mention the politics. but thanks to the private sector we now have modern fsd, which is bar none the most advanced and capable self-driving solution in the world. now that self-driving has gotten this far, it's probably much less likely to be aborted if subjected to government intervention and oversight. in light of the huge benefits that self-driving cars stand to create, measured in human lives, compromise is the only rational proposal. shutting down fsd, like mouth-breathing internet commenters suggest, would be objectively wrong given the state of its competitors and the nature of the problem.
edit: your bio says 'fuck elon musk.' making a two-dimensional character out of elon musk isn't a good way to understand him or his projects. when the time comes and elon musk uses his influence and money to do something really bad, it might be a boy-who-cried-wolf situation thanks to your camp.
My Twitter bio does. Added shortly after he banned links to Mastodon, broke Tweetbot (and lied about them breaking the rules), and announced breaking changes to the Twitter APIs I use extensively at work with a few days warning.
you seem passionate and knowledgeable about this so i will ask you: i want to know more about the API changes. detractors say it was at best an irresponsible change that inconvenienced companies relying on the API. proponents say that musk simply stopped making the API free, which was always unsustainable, and people should have known better. what was really going on?
Entire businesses had their products cease to function, with no warning, and no explanation from Twitter, until a couple of days later they got vaguely libeled by Twitter's developer account. (https://twitter.com/TwitterDev/status/1615405842735714304)
Absolute clown show. If they'd said "in 90 days we're shutting down third-party clients and implementing a paid tier", people would've grumbled but seen it as fairly reasonable. Kneecapping devs who've been building Twitter apps and integrations for a decade was cruel and unnecessary.
I 100% agree with you that we should have regulators auditing and verifying safety information for autonomous systems.
But I'd like to point out that the link you included is out-of-date. Tesla has continued to publish their autopilot safety numbers in their quarterly slide decks. Here is the Q4 2022 deck for example, see page 10: https://tesla-cdn.thron.com/static/SVCPTV_2022_Q4_Quarterly_...
Miles between accidents on Autopilot
Q4 2021: 4.3 million miles
Q1 2022: 6.5 million miles
Q2 2022: 5.1 million miles
Q4 2022: 6.2 million miles
That’s the same old sketchy number they like to tout.
Autopilot can only be used in safer conditions, and if the car goes "whoops I'm out, take over" shortly before an accident, that doesn't count in the stat either.
It’s not an old number, it’s a new number reported every quarter.
But you’re right that comparing largely highway miles vs all miles isn’t completely fair. FSD on the other hand can be activated and used in most scenarios and has 3.2 million miles between accidents vs the US average of 500,000 miles. So still quite a bit safer but less so than autopilot.
As for autopilot deactivating right before an accident, if autopilot was active within 5 seconds of the accident it is still attributed to autopilot, not the human driver.
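Taking the figures quoted in this thread at face value (and keeping the caveat that the denominators are not directly comparable: Autopilot miles are largely highway miles, while the US average covers all driving), the claimed safety ratios work out as a quick back-of-envelope check:

```python
# Back-of-envelope ratios from the numbers quoted in this thread.
# Caveat: these are Tesla's self-reported figures, and the mile
# populations (highway-heavy vs. all US driving) differ, as noted above.

fsd_miles_per_accident = 3_200_000        # claimed FSD figure
autopilot_q4_2022 = 6_200_000             # Autopilot, Q4 2022 slide deck
us_avg_miles_per_accident = 500_000       # quoted US average

fsd_ratio = fsd_miles_per_accident / us_avg_miles_per_accident
ap_ratio = autopilot_q4_2022 / us_avg_miles_per_accident

print(f"FSD vs US average: {fsd_ratio:.1f}x")        # 6.4x
print(f"Autopilot vs US average: {ap_ratio:.1f}x")   # 12.4x
```

So the "quite a bit safer but less so than autopilot" claim is just these two ratios, with all the apples-to-oranges caveats attached.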
> It’s not an old number, it’s a new number reported every quarter.
"Old number" here means "the same old stat they trot out every time". The value gets updated; the concerns over its being a cherry-picked apples-to-oranges comparison remain.
FSD still nopes out in the most challenging circumstances, which are the circumstances where accidents are far more likely to happen. It's like a surgeon bragging about their low complication rate; if they run out of the OR screaming when something unexpected happens and their colleague has to take over, it's not a super useful stat.