All of this during maybe 30 minutes of driving. Also, these are just the most severe instances; the Tesla also drives very awkwardly in general. It changes lanes way too often (even in intersections) without rhyme or reason, and it weirdly creeps forward during red lights, getting very close to crossing traffic, for no reason at all.
All in all, this is an incredibly poor performance.
Because they're led to believe all this public beta "testing" actually does something to improve the system. So they go out on roads, enable FSD and click the feedback button when it makes a mistake. It's pure theatre, as it does nothing to improve a system that is fundamentally poor.
From my observations I can’t show causality, but the three situations where I used the feedback button the most all got addressed.
- my town inexplicably decided to paint dashed white lines on the sides of the bicycle shared-lane markers. The Tesla would see dashed white lines, which normally mean separate lanes going the same way, and dance around trying to get into a lane, but they were all too narrow. The traffic lane was double wide to account for church street parking, but not marked as such. So the markings made a tiny center lane and two side lanes that were each just a bit too small.
- on winding tree-lined roads, when an oncoming car first appeared around a bend it would assume the car might be going straight and would hit us. There would be a quick deceleration and a dodge toward the shoulder until it realized the oncoming car was actually curving and would probably stay in its lane.
- when the pavement expanded to include an unoccupied, unmarked parking lane for a short stretch, the car would assume the road suddenly had very wide lanes, decide it was too far toward the center, and jerk over.
At this point FSD does a passable job on my regular daily drives, except at one intersection where something in its map is wrong: it wants to exit the right-turn lane just before turning right. I don't let it make turns in contested intersections yet, because it is a little timid and might confuse drivers. But that is improving. Just not there yet.
As for exceptions, it does well at finding and dealing with construction areas, garbage trucks, bicycles, and pedestrians.
I'd appreciate it if they'd add "slightly dodge roadkill". It will merrily run over the same dead squirrel four times a day with precision.
They /are/ the training data and trainers in a very literal sense. Every time they correct or take control over FSD I imagine it gets fed back into the training data. Will it ever create a fully self driving system? Maybe.
That's not true and Karpathy has confirmed it multiple times. They can create a fingerprint for data they want the system to feed back to the mothership and they may manually label datasets fed back through the report button on the MCU, but disengagements are not a signal they use at all.
It's useful for polishing an already mature system. But FSD is poor at even doing the basics because of their (lack of) sensors. They need more than low quality 30 second video clips if they ever want a usable autonomous system.
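For what it's worth, here's a minimal sketch of what a trigger-based ("fingerprint") data campaign might look like, as I understand the public descriptions. All names, thresholds and data structures below are my own illustrative assumptions, not Tesla's internals:

    # Hypothetical sketch: the fleet uploads clips only when telemetry matches
    # a predefined trigger ("fingerprint"), not on every driver disengagement.
    from dataclasses import dataclass

    @dataclass
    class TelemetryFrame:
        speed_mps: float           # vehicle speed
        brake_decel_mps2: float    # braking deceleration (positive = braking)
        lead_vehicle_gap_m: float  # estimated gap to the nearest lead vehicle
        driver_override: bool      # driver took over during this frame

    def phantom_braking_trigger(f: TelemetryFrame) -> bool:
        """Fires on hard braking with no nearby lead vehicle -- the kind of
        narrow scenario a campaign might fingerprint."""
        return f.speed_mps > 20.0 and f.brake_decel_mps2 > 3.0 and f.lead_vehicle_gap_m > 80.0

    def should_upload_clip(frames: list[TelemetryFrame]) -> bool:
        # A plain driver override is deliberately *not* a trigger here,
        # matching the claim that disengagements alone aren't used as a signal.
        return any(phantom_braking_trigger(f) for f in frames)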
I’m sure other people use FSD as a bad chauffeur, but around town I use it when I’m willing to be extra alert and train it. There is a button I press when the car has messed up and the surrounding data gets sent back to the Tesla mothership as a defect.
If I’m not willing to be in vigilance mode I just drive myself.
And the beauty of it: they don't advertise that if an accident is imminent the system deactivates itself to avoid any legal responsibility. An autopilot that deactivates itself. Nice.
Your "straw man!" claims ignore that for every 'safety' related comment or action Tesla has done, they've had to be dragged kicking and screaming into it.
First there was zero monitoring of attention. No steering wheel torque, nothing.
Then when pushed and threatened they added it, and set the "hands on steering wheel" interval to "ONCE every fifteen MINUTES".
Then they had to be threatened to reduce that to a sensible value.
All the while, Tesla: "the driver is only in the seat for legal purpose - the car is driving itself", or "use Summon to bring your car to you while you deal with a fussy child".
So you can call them lazy edge-case idiots all you like.
But you can't pretend that Tesla hasn't absolutely fostered and nurtured this culture, because they have.
We are terrible at maintaining 100% concentration over extended periods of time while nothing is happening. People fall asleep driving. The mind starts to wander. We get distracted by our children.
I don't think it's about believing the hype. It's about having a system put in place which encourages you to stop paying attention.
I do think there have been great improvements in car safety thanks to AI. My car brakes by itself if it detects a pedestrian while I'm in reverse. It flashes lights in my side mirrors if another vehicle passes by. I think those systems are safety first, and if you buy a car they are the responsibility of the car maker and not the buyer. I don't think you would expect people to check the airplane model when taking a flight. It's the responsibility of the airline and the manufacturer.
"People who believe the system they paid 10-12k for called 'full self driving' should be able to drive itself are idiots" is a pretty special class of Elon bootlicking.
People who understand the system don't buy it because they realize it's a scam, everyone who bought it was duped. The most valuable thing at Tesla is the list of rubes they have.
The whole point should be that you don't. You are giving live feedback to (probably) the most advanced machine-learning system ever. Which hopefully in a few years will actually work.
You are not (currently) using an autonomous car. If you think so you are an idiot.
> You are not (currently) using an autonomous car. If you think so you are an idiot.
Let's define autonomous as self driving. According to their marketing page[1]:
> Tesla cars come standard with advanced hardware capable of providing Autopilot features, and full self-driving capabilities—through software updates designed to improve functionality over time.
So it has capabilities of autonomous driving, which means a Tesla can be considered an autonomous car. Unless I'm an idiot for trusting what their marketing page tells me, of course.
They phrase it weirdly (accidentally or deliberately to confuse readers, take your pick), but all it says is that they will eventually provide the software, running on the hardware you buy today, to actually do it.
> Tesla cars come standard with advanced hardware capable of providing Autopilot features, and full self-driving capabilities—through software updates designed to improve functionality over time.
What I understand is this:
Tesla cars come with hardware capable of providing features for the Autopilot system and come with full self driving capabilities. The functionality related to these capabilities will be improved over time through software updates.
If they meant that there will eventually be software that works with the hardware in the system, but there isn't software today, they should word it differently.
The cars come with advanced hardware. What is the hardware capable of? It is capable of self driving. How will the hardware achieve the thing it is capable of? Through software updates (that have not yet landed).
That might be what some weaselly lawyer argues it means in court, or what someone deeply familiar with it all would infer. But in no way is that how any reasonable lay person would read it.
"Improve functionality over time" implies that the basic functionality of full self-driving is there already, and that it will be improved. No reasonable person would take it to mean "does not exist yet, and may never do".
They would also take "capable of" to mean "right now", not at some nebulous point in the future, especially given the context of the feature being a paid upgrade.
The definition of "full" is "not lacking or omitting anything; complete". One might reasonably expect "full self-driving" to mean a human does not need to intervene or monitor. I.e. What the industry commonly calls autonomous driving level 5. They clarify what they mean elsewhere on their web site:
> Autopilot, Enhanced Autopilot and Full Self-Driving Capability
> are intended for use with a fully attentive driver, who has their
> hands on the wheel and is prepared to take over at any moment.
> While these features are designed to become more capable over time,
> the currently enabled features do not make the vehicle autonomous.
So they don't even claim level 1 autonomy.
Even with your version, the hardware may very well never be capable of full self-driving at all. There's no evidence to suggest it will be - hopes, dreams and promises don't count. And they've done an upgrade on the earlier-gen Model S cars already, where they presumably made similar claims.
The whole thing is misleading at best, downright trading standards fraud at worst.
For what it's worth, my Model 3's carriageway departure warning kicks in randomly and the amount of phantom braking it does even on basic cruise control is terrible. Autopilot is worse. I find monitoring it in autopilot mode significantly more stressful than just driving the car myself. I wouldn't trust FSD at all, and certainly not enough to want to fork out the silly money it is.
That is terrifying! I try to stay away from Teslas I see on the road because they are completely unpredictable. I think I'd prefer being near a poor driver with their attention on the controls to Autopilot with a poor driver not paying attention.
Any test on public roads of a system that performs like this should have a professional team actively, directly monitoring it. Not just any rando who happened to be able to afford a Tesla.
The number of times I see Teslas suddenly swerve partially into the next lane or stomp the brakes for no reason on the freeway with nobody around them is ridiculous.
To top things off, one time I saw this happening on 280 in the Bay Area and I decided for safety's sake to pass them. I looked inside and there was a baby in a car seat in the back and a guy in the driver's seat with both hands off the wheel, holding onto a phone that he was absorbed by.
Meanwhile, I've been driving mine with Autopilot daily for over a year, and can count on one hand the number of times I've needed to take manual control.
My anecdotal experience is they're incredibly predictable and, with an attentive driver (as you should always be in any car), a complete non-issue.
You can't make an apples-to-apples comparison between humans and the current generation of "AI".
A human does not need to be taught how to handle every edge case because we have the ability and intuition to think on the spot and create solutions on the fly. It's incredible how fast our brain can take in information and formulate a plan.
Meanwhile, if an "AI" has not specifically been trained against a certain variable or situation, it will have no idea how to handle it. It can't actually create a solution like a real human brain can.
Here is another idiot "testing" it out in front of a school bus. The car wouldn't stop so he had to intervene: https://youtu.be/zQO6RJCUPEI?t=10
I'm continually surprised how it's still being allowed on public roads. Tesla claims it's not an autonomous system to regulators and refuses to provide any data, while marketing it as imminent Level 5 self driving. It seems like it will take someone to be seriously injured or die for any action to occur.
> Here is another idiot "testing" it out in front a school bus.
He's anything but an idiot: it helps raise awareness of the serious issues Tesla's self-driving has. He was also obviously ready to act immediately, which he did, and no kids were ever at risk in that vid.
Some other individual might not be ready to act immediately. This is really where you want trained safety drivers sent by the engineering team to do targeted testing.
The rough part is how the guy's obviously making a decent amount of money with the YouTube series on Tesla. The more dramatic, the better, which is only going to further incentivise risking others.
I don't know if the guy is an idiot or not but I personally wouldn't have handed the wheel over to an AI right next to a school bus in order to find out if it was going to run the kids over.
I'd like to know why other adults who did not sign up to part of a distributed beta test are overlooked in your calculus.
Or has the tech industry beaten any semblance of self-respect out of everyone to the point it is just expected the technocrats get to use everyone as guinea pigs?
??? I thought that was what it was for. Or maybe for making wacky collision vids on YouTube?
Tesla just closed their whole self-driving office in San Mateo, and fired ("for poor performance") most of the staff, directing some to come to work across the bay. For now.
People still employed on that activity would best update their résumés. People who paid for self-driving already? I would ask for a refund, in your shoes. (I wonder if they will comply.) People fired for cause without due process? Sue. Really. How scummy is it to do a mass layoff, and make toxic shit up just to avoid complying with laws for mass layoffs?
Supposedly the "important" people are still employed. For now.
I can’t imagine there is a way to get a refund, but if I could I would. 10k on something they won’t even let me beta test because where I live triggers incorrect forward collision warnings too frequently for my driving to be considered good enough.
I was skeptical but paid up front, and I’m pretty sure I got taken
I don’t think the “safety score” has much to do with safety. I suspect it’s a way to ensure that only the most obsessed/devoted fans will get access to the beta, since there seems to be no way to get a high enough score through normal driving in a populated area. You basically have to be willing to devote a ton of time to “gaming” your score, rather than just driving places.
Yup. The simplicity of the system, combined with the fact that the method of calculating the score is publicly available, means that it encourages drivers (in some circumstances) to drive less safely. It also doesn't include rapid acceleration or speeding despite those being pretty good predictors of purposeful unsafe driving, probably because those are part of the reason you buy a Tesla in the first place.
Beyond that, the way it is calculated makes it less of a safety score and more of a risk score; it can't distinguish between a person driving effectively in a risky environment (safe) and a person creating a risky environment (unsafe). Some portion of the testers are then people who actively avoid safe defensive driving techniques in risky environments to maintain their safety score. That type of person is the absolute last person you want testing an autonomous system.
This video is a great overview of FSD beta safety, I've linked to the section specifically about the safety score: https://youtu.be/sHyOL_vDQMQ?t=801
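As a toy illustration of the risk-score point (this is not Tesla's actual formula; the penalty weight and threshold are made up): a score that only counts hard-braking events per mile can't tell a defensive driver in a hazardous environment apart from a driver who creates the hazards.

    def toy_safety_score(hard_braking_events: int, miles_driven: float) -> float:
        """Toy score, higher = 'safer': penalizes hard braking per 1000 miles."""
        if miles_driven <= 0:
            return 0.0
        events_per_1k_miles = hard_braking_events / miles_driven * 1000
        return max(0.0, 100.0 - 10.0 * events_per_1k_miles)

    # A driver braking hard to avoid downtown jaywalkers and a driver who
    # tailgates until forced to slam the brakes log identical events --
    # the score cannot distinguish them.
    print(toy_safety_score(hard_braking_events=5, miles_driven=1000))  # 50.0
    print(toy_safety_score(hard_braking_events=0, miles_driven=1000))  # 100.0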
It’s plausible that the goal isn’t to select for safety at all — the goal could be to select for drivers for whom, for whatever reason, FSD has a chance of actually working.
> they won’t even let me beta test because where I live triggers incorrect forward collision warnings too frequently for my driving to be considered good enough
WTF. And I guess they will still compare "Autopilot" accidents with the average driver's, when they are literally screening their (supervising) drivers for skill?
Talk to a lemon law attorney. If your car doesn't do what it's supposed to do, it might be a lemon, or at least lemon adjacent enough to be handled by such an attorney. Don't fight with Tesla on Twitter.
You need to set your forward collision warnings to 'early'. The safety score ding happens at medium. If you have it set to 'late', you will get dinged and never even know it until later.
I've got a baby in the car, and the noise of the alert wakes him up. It's for a bridge at the top of a hill on my way home, and it triggers every time. It's not a valid warning.
Sue, I guess. Seems like they would settle quickly, because fighting tens of thousands of lawsuits would look bad. And it might cost more than settling anyway.
I think for that money I'd be in small claims court against the skill of Tesla's legal team; I'd get washed, so I don't think it's worth my time.
Honestly, I didn’t even want full FSD. Tesla dropped the price of my car by 10k after I’d purchased but before I took delivery, they gave me this as a way to smooth it over so I didn’t just cancel my delivery, eat the 2k fee and re-order saving myself 8k. I didn’t trust Musk to deliver on his promises, but that was 2018-19. In my wildest dreams I didn’t imagine it would be 2022 and they just fired the team doing it, and I still don’t even have access to a beta test.
IANAL, but I expect that the killer lawsuit will come from the estate of someone who doesn't own a Tesla but was killed by one. No contract = no arbitration, and not much leeway for Tesla to claim that the driver should have known better.
Once in Rome I saw a parked car that another car had crashed into for some reason, and the driver had just left their car in the middle of the road, still crashed into the other one.
Whenever somebody calls Musk the smartest man in the world, they don't understand the connection between data labeling and Tesla's propensity to stop for nothing and crash into stopped emergency vehicles.
OpenPilot doesn't have a data labeling team. I think you should do some more research before making a drive-by comment on something you know nothing about.
Are you saying that it is OpenPilot that is responsible for Teslas crashing into stopped vehicles and barriers and turning illegally in front of trams; and that more or better data labeling would not improve that behavior?
Because I didn't see any mention of "OpenPilot", as such.
Yes, really. The world is awash with fantastical conclusions propagated by a terrible handling of first-hand experience, which spreads rumors and worse.
Yes, it is a certainty. It's abundantly clear from my own direct experience, and consistent with evidence from other people's, that it is at least a symptom of the increasing appetite for conflict in our zeitgeist.
Speak of what you know; for what you don't, be quiet. Something like that.
> Supposedly the "important" people are still employed. For now
I don't know if you're unaware of this or omitting it to mislead others, but everyone laid off was a data labeler. This is completely consistent with a strategy shift to preserve capital: in-house annotation isn't that common for AV companies anyway. And data labelers are 100% the most "unimportant" in-house workers: in contrast to those defining and building the labeling processes, their job is literally to be as consistent and mechanical as possible.
I'm not a fan of Tesla. I think their approach to self-driving is irresponsible. But the level of dishonesty in conversation about them is astonishing, even on fora like this that are ostensibly higher-quality.
Other coverage, like CNN's, made it sound like the layoffs were concentrated among annotators. But your comment made me double-check the original reporting[1], which AFAICT says that the layoffs were both data evaluators and annotators.
Granting you the pedantry point here, I hereby happily amend my comment to refer to "data [evaluators] and annotators", with zero impact on my point.
> So, who is being dishonest?
Not sure if you're actually familiar with the definition of the word? Being potentially, technically wrong on a point that doesn't change the overall claim is enormously different from the core of a claim being wrong (e.g. the implication that "supposedly important people haven't been laid off" when they demonstrably focused the layoffs on the most commoditizable and replaceable labor in the industry).
If you complain about the quality of discussion, the onus is on you to be better. But you are not better, or even as good. My original "important" was more accurate than your criticism, which took it as an example of unfair dialog and supposedly corrected it.
But you do prove your point: the discussion of the topic here, with your comments, was overall only a little better than your own contribution. Not, though, substantially better.
Meanwhile Cruise just obtained permission to charge passengers for driverless rides in San Francisco [1]. Cruise has been cleared to charge for rides in vehicles that will have no other people in them besides the passengers. There will be no back-up human driver present to take control if something goes wrong.
Why is Tesla so far behind? Tesla is generating these autopilot mishap news on such a regular basis that it is giving the entire industry a bad name.
Presumably these are the same regulators that allowed this very-alpha Tesla software to be tested on public roads. Also, from your own reference, it does not seem like Cruise's capabilities are much above an amusement-park toy ride:
"...The regulators issued the permit despite safety concerns arising from Cruise's inability to pick up and drop off passengers at the curb in its autonomous taxis, requiring the vehicles to double park in traffic lanes..."
"...vehicles confined to transporting passengers in less congested parts of San Francisco from 10 p.m. to 6 a.m. Those restrictions are designed to minimize chances of the robotic taxis causing property damage, injuries or death if something goes awry..."
"... Cruise's driverless service won't be allowed to operate in heavy rain or fog either..."
Plus it seems the system is another alpha/beta release:
Yeah but how often do you see negative stories about Cruise and Waymo? Where lives are endangered? The story about cars stopped in the middle of a street doesn't count since lives were not in danger.
In this case a Cruise car was rear-ended by another Cruise car driven by a human, because their unsophisticated software makes the car brake hard. As explained in the article:
"...That said, the rear-endings demonstrate that the technology is far from perfect. Cruise cars follow road laws to a T, coming to full stops at stop signs and braking for yellow lights. But human drivers don’t—and Cruise cars will be self-driving among humans for decades to come..."
"...The fact that a driver Cruise trained to work with these vehicles still managed to rear-end one emphasizes exactly how flawed they are..."
This story is from 2018... the lidar-focused portion of the AV industry has advanced light-years since then.
You may as well post an article about Eliza to make a point about DALL-E.
And the limited domain you're describing for Cruise is the domain in which they've reached the statistical safety performance required to fully remove a safety driver. They and Waymo are driving all over the city during the day, with very few disengagements: for a couple of years now they've been able to post credible evidence of long and complex disengagement-free drives.
The last epsilon% of safety performance is the hardest, and I'm not convinced they or Waymo can crack it soon. But your comments are horribly misinformed.
"...The regulators issued the permit despite safety concerns arising from Cruise's inability to pick up and drop off passengers at the curb in its autonomous taxis, requiring the vehicles to double park in traffic lanes..."
And after all those light-years, the lidar-focused industry still can't see in fog, dust, rain or snow. An Eliza, just with more compute power and some badly understood ML algos.
Yes, I have seen the demo videos of self driving cars on snow.
> Presumably, these regulators are the same, that allowed this Tesla very alpha software to be tested on public roads.
I don't think Tesla has permission from CPUC to run its public testing program. Last I read (articles from a year ago), Tesla is telling CPUC that Full Self Driving is only Level 2; CPUC only regulates levels 3 and up.
If we’re going to try to have autonomous vehicles, at some point these things have to be tested on real roads. Would you rather have a real human behind the wheel to take over in an instant or just have the computer plow through whatever if there’s a bug? When you are driving tesla “autopilot” you are still liable for everything the car does, you still need to be attentive just like you normally would. If the car crashes despite that, then you weren’t paying enough attention and that is your fault. In my experience any time it did something stupid, there was always plenty of time for me to take over and easily avoid an accident. In other words, it’s not going to rapidly swerve into oncoming traffic at 70mph. If you don’t want to accept the legal risk of what your car does on beta autopilot software or don’t want to continue paying attention to the roads while on autopilot then you shouldn’t be using it.
You get way more distracted if you are doing nothing than if you're actually doing something. So I'd expect the reaction times for people to stop an autopilot from killing them to actually be much longer than the reaction times of an active driver.
Then they should not be using autopilot. When I have autopilot engaged I am way more alert. Also you aren’t “doing nothing” when you have autopilot engaged. The car annoys you if it doesn’t sense any force on the wheel within 10 seconds and even if you are looking at your phone (or anywhere other than the road) the car annoys you too (yes there’s a camera in the cabin looking at you when you’re on autopilot).
So what you mean is that Tesla should be using trained safety drivers like real AV companies do? Also, to be clear, you are either lying or misinformed about the behavior of Autopilot as far as detecting engaged driving goes.
The point is acknowledging that over a million people die per year on the roads due to human error and trying to help beta test software that would help bring that number down over 90% so that our kids can have a better future.
Before testing on real roads, these things should be tested in realistic closed environments. Tesla has been too cheap and lazy to take that basic safety measure.
Is Tesla so far behind or are we just not seeing all the dangerous behavior of Cruise-piloted vehicles because they haven't made their software available to the general public?
IMHO none of this software should be getting tested on public roads.
Tesla is behind because they had a fucking big pile of C/C++ + machine learned models and wanted to brute force their way to an acceptable quality.
In a recentish Lex Fridman podcast Elon finally talks about how they need to solve the hard part which turns out to be ... drum roll .. world modeling! (Kids in a school zone or behind a school bus or something like that was the example I think.)
Who would have thought that without a model that can provide a consistent "abstract model" of the world it's hard to fit incoming noisy sensor data?
Cruise and Waymo have lidar. Lidar can give you much more certainty about detecting objects and distances. Teslas only use cameras and have to do complex image analysis to detect objects.
On the other hand lidar is more expensive. But it is getting cheaper and it does seem like it might be necessary for actual self driving.
The train appears on the car's HUD and the distance appears to be calculated properly before it tries to turn in front of it. I don't think object detection is what went wrong in this case (which is scarier imo).
The human brain is far faster at image processing than any computer you may be able to put in a car. So shortcuts are necessary. Perhaps one shortcut is to get actual reliable object shape and position data from Lidar than trying to infer it from flat images.
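A simplified contrast of why lidar range is "direct" while camera range is inferred (illustrative only; real perception stacks are far more involved, and the pinhole model and example numbers below are my own assumptions):

    def lidar_range(time_of_flight_s: float) -> float:
        """Lidar range is a direct physical measurement: half the round-trip
        time of the laser pulse times the speed of light."""
        c = 299_792_458.0  # speed of light, m/s
        return time_of_flight_s * c / 2.0

    def camera_range_estimate(object_height_px: float,
                              assumed_real_height_m: float,
                              focal_length_px: float) -> float:
        """Pinhole-camera estimate: depends on a *guess* of the object's real
        size, which is exactly where vision-only systems can go wrong."""
        return assumed_real_height_m * focal_length_px / object_height_px

    print(lidar_range(5e-7))                       # ~75 m, measured
    print(camera_range_estimate(30, 1.5, 1500.0))  # 75 m, but only if the 1.5 m guess is right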
Tesla is far ahead. Cruise was literally in the news like 2 days ago because their entire fleet just stopped driving in the middle of an intersection and dozens of their staff had to drive out to the cars and manually take them back to the parking lot. Cruise works (except for when it doesn't) in extremely specific geofenced areas in very low traffic. Tesla works anywhere and in a lot more conditions.
Tesla is only far ahead in their willingness to take on risk and make at least partially untrue claims. There are multiple companies running actual robo-taxis in other cities, but there is not a single city where Tesla is running actual driverless vehicles. If Tesla were actually capable of running robo-taxis in one of the cities with the right regulatory atmosphere, they would, but they can't.
All about where it works. Geofenced solutions are much easier.
I wouldn't say they're behind, just solving a different problem. Should they have focused on in-city and highway driving? Probably. They would have had a good product sooner. Even if they don't win this whole thing, they pushed the entire industry in a good direction.
Tesla is selling a large number of vehicles with FSD, used across a large operational design domain.
Cruise isn't selling cars. Small fleet, that can operate at limited hours at night, at limited locations in San Francisco. Small scale and limited operational design domain compared to Tesla's.
Because Cruise does not need a back-up human driver present. Tesla seems far behind because as this story indicates, a human needs to be paying close attention when Tesla is in charge.
Overall, Cruise/Waymo appear to have the better approach. I would rather start with a system that works and is reliable and expand its capabilities, than start with a system that barely works and hack it till it works. Especially when human lives are endangered by the tech.
Because as much as Tesla marketing wants us to believe, FSD is anything but. Tesla is so ridiculously far behind real AV companies that I must assume that you're a paid shill or an absolute idiot.
The answer is that Tesla is creating a general solution, whereas Waymo and Cruise are targeting specific geographical areas which they can map precisely.
Waymo and Cruise are also using more expensive, bulky and unsightly hardware, including lidar. They are starting at the top, with a solution that works, then working their way down. Their approach is confidence-inspiring. Tesla is starting at the bottom. The frequent mishap stories coming from Tesla does not inspire confidence.
You can also start at the bottom like most European manufacturers, and slowly build up to autonomous driving.
That is, start with collision warning, lane assist etc. Boom, you have a Volvo that drives itself on a highway and requires the driver to take over elsewhere.
Because they don't have Lidar. They're just attempting to solve a completely different problem, which is much harder. IMO it's a horribly irresponsible approach
Why is the US allowing this beta testing of a feature that impacts safety to continue? There have to be at least dozens of documented cases of this being unsafe. Have regulators just given up or are they that incompetent?
Part of it is that government regulation of new things is slow. Part of it is that Congress is barely functional, at best. Part of it is that Republicans are anti- the government doing anything useful.
California has been making some noises about cracking down on this, there was a story on HN a few weeks back. My sense is that some people will need to die in a sensational way, that happens to go viral, before anything really goes anywhere.
The crafting of regulations (not laws) and their implementation is the domain of the agencies under the executive branch. This is where the scientists and researchers are hired - neither the legislative nor judicial branches routinely employ them.
The funding of these agencies has been consistently cut back over the past decades resulting in a slower moving body and one that is not as able to employ experts in the various domains who understand that technology and issues as part of a career.
Additionally, the recent Supreme Court decision against the EPA further makes it difficult for the agencies to make and enforce those regulations. NPR Marketplace: Today's Supreme Court decision is about more than the EPA - https://www.marketplace.org/shows/marketplace/todays-supreme... gets into the SCOTUS ruling and how it attacks what is called the administrative state. https://www.npr.org/2022/01/30/1076844670/supreme-court-just... also goes into the rolling back of the administrative ability of the executive branch.
Administrative law is a farce and clear violation of the intent expressed in the Constitution. When more laws are entering the books through the executive and judiciary than through the legislature, you've got a problem.
Congress sets executive agency scope. Not executive branch employees or judges out of force of habit.
If they want increased scope, best ask the People if that's alright, and get the People leaning on their legislators.
Coz Tesla has a lot of money and the government agencies that should be intervening are hobbled due to decades of under funding and being de-fanged by other parts of the government.
From my perspective in the US, the NHTSA which investigates crashes, regulates automobile safety, and handles recalls, etc, seems highly competent regarding general auto safety and investigating issues.
But politicians and the press by and large were drinking the self-driving Kool-Aid for the last 10 years, and so there’s some level of political risk for the NHTSA to clamp down hard on something politicians have bought into without some level of popular outrage to back them up. So I think not exactly “given up” but more “afraid to poke the hornets’ nest”.
Although as self-driving proves more and more to be a dud/fraud, I hope that changes. I’ve been surprised they didn’t take a stronger stance on it after the spate of crashes into emergency vehicles on shoulder, but they may have been (perhaps rightly) afraid blowback from Trump would gut their powers entirely.
Any system based on a human doing nothing for long periods and then being expected to act swiftly and accurately is _deeply_ flawed. Human attention doesn't work like that. This is russian roulette with the general public forced to play.
So FSD beta testing has been around for what, a couple years? To continue the tortured analogy: We’ve been playing this Russian roulette for millions of miles.
How many times do people have to be “almost” shot, but not actually shot, before we concede that maybe the gun has no bullets?
> How many times do people have to be “almost” shot, but not actually shot, before we concede that maybe the gun has no bullets?
You say that like we have anything like good data. We have pretty much what Tesla's marketing team put together on their website, and _maybe_ what some cities or states have cobbled together.
And before we assume that Tesla wouldn't lie, let's remember that they call this "Full Self-Driving".
Because it hasn't impacted safety: there have been no known accidents so far with over 100k users. It's also the driver's responsibility to disengage at the first sign of trouble, just as it is on a freeway with cruise control in any other car. The problem is that many of these YouTubers try to push the system to test its limits, and wait until the last second to disengage, or let it do the wrong thing (for clicks? look at the "oh" face in the video thumbnail).
Now there is a good argument that as FSD gets better it actually gets less safe, because drivers stop paying attention if it rarely makes a mistake. I think you can battle this kind of attention fatigue by making simple driving L4, but requiring attention for unprotected lefts and other riskier moments. Of course they'd need to have that approved, but having millions of vehicles on the road they should quickly be able to prove which kinds of driving FSD can do well enough for L4. To be clear: as of right now there is no kind of driving it can do to that safety level.
What a dumb comment. It literally consists of just a claim that someone is invested in a competitor, and then the implicit assertion that of course anything they say, regardless of how true and damning it is, should just be ignored. Only the truest cultists and the dumbest others would buy it.
Exactly. There definitely are some scary Youtube videos, but how do you ban something that's apparently safer [0] than the average road user?
As far as I'm aware, there have been zero fatal FSD beta crashes. I hope it'll stay that way.
What would be your arguments as the governing body for a ban? I think you'd need some hard statistical evidence, random Youtube videos might not count.
One thing about this video that caught my attention: around 14:40, the Tesla is in the "forward" lane, but it tries to make a left turn. While I agree that this is against the rules, my experience with driving around a big city is that half of the human drivers would try to do the same if they accidentally ended up in the wrong lane. So, if Tesla is using data captured while a human is driving, can it pick up bad habits as well?
Yes, it would not surprise me. I once had Google Maps tell me to go the wrong way down a one-way road, but I think their algorithms are better, as it is very rare that I encounter stupidities like that.
it's absolutely surreal to me that 1) this is allowed to be used on public roads and 2) that Tesla is permitted to call whatever they've invented "self driving" or "autonomous".
I really don't understand how Tesla markets something as "Autopilot" and "Full Self-Driving" and then also says "oh it's beta" and "you must be ready to take control of the car at any second".
It’s not autopilot, it’s Autopilot. They’re making up their own branded feature called Autopilot which is nothing like what any reasonable person would consider to be autopilot.
If you go back through the years here, you’ll see increasing care they use with their language:
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Not only was this clearly false in 2016, it’s clearly false today, nearly 6 years later.
From the viewpoint of Tesla’s marketing and legal departments, the nice thing about that phrase is that there’s no way to prove it clearly is false.
Their claim isn’t about their cars; it’s about the fact that some software can be created that gives their cars “a self-driving capability at a safety level substantially greater than that of a human driver”. Even if they never manage to deliver such software, that’s no proof such software doesn’t exist.
No snark intended, but doesn't having to constantly pay attention and be ready to take over control negate a lot of the benefit of the self-driving features? It seems like you'd have to apply just as much mental energy to the task as if you were actually driving, so the only real benefit is that you don't have to physically make the motions. In fact in some ways it seems mentally worse, since when you're driving you aren't surprised by sudden turns in front of trams and the like. With self-driving, there's a constant possibility of those kind of surprises, so I would think it would feel a lot like riding in the passenger seat with a new teenage driver.
Worse: it's like driving with an alien driver. The teenage driver at least shares your language and some "human" cognitive patterns with you. You can communicate with him, you can read his feedback, and you can even tell when he struggles with something so that you pay more attention to correct him (i.e. because it's highly likely he struggles at the same places you do).
No such luck with this ridiculous software. An oncoming train that any human (with normal sight) would have no problem detecting? The car drives right into it. You literally have to pay full attention during even the simplest of situations, because there is no way to understand how this piece of software actually thinks, much less to predict how it will misbehave.
Today it's an oncoming train that's painted white, tomorrow it's a cricket on the road that causes it to swerve into oncoming traffic, or a small paint chip on a traffic sign that causes the OCR to read "speed limit 200" instead of 20.
> An oncoming train that any human (with normal sight) would have no problem detecting?
To be fair, failing to yield to oncoming traffic when turning left is an extremely common mistake for human drivers to make as well.
In fact, earlier in the video, the car correctly yielded to an oncoming car, the human driver overrode it, then complained that the oncoming car "cut him off"! https://youtu.be/yxX4tDkSc_g?t=494
Because your brain is freed up from taking in all of the data points for constantly steering and speeding up and slowing down, it's incredibly less exhausting. Instead, you can focus on observing a lot more of the road and capturing things that you previously would have been unable to focus on. It's not often it deviates from expected behavior and catches you off guard with some radically unpredictable maneuver. When driving longer distances between cities, it's a night and day difference compared to my car that only has radar cruise control but still requires steering. Perhaps it's something you have to experience firsthand to truly see how much less exhausting of an experience it is.
By taking away throttle control and steering, you leave yourself with only decision making. It's reducing major components of normal driving behavior. It's not perfect, but knowing its strengths and weaknesses allows for better driving in my opinion.
It could definitely be one of those experience it to understand it things, since I've never driven very far in a car with more advanced self-driving (trying it in a colleague's Tesla on a short drive to lunch is about the extent of my experience).
I'm a private pilot though, and what you're describing sounds a lot like the autopilots on small planes. It handles keeping course and altitude for you, so you can focus on the rest of the flight, and it is indeed way less taxing than hand flying a longer distance. And like Tesla's, a plane autopilot will occasionally do something weird, so you can't entirely ignore it. I would think the difference though is that in a plane, nothing the autopilot does will be fatal or dangerous very quickly, so you have plenty of time to notice and correct it as long as you aren't sleeping or something. When you're driving a car, a second of steering in the wrong direction can quickly turn into an accident, so you as the driver have to be constantly ready to immediately take over. Even if the chance is low, as long as it's often enough to be a factor, you still need that instant readiness.
I wonder if the value is more on longer drives, as you said, vs city driving. The fatigue factor driving between cities is a bigger issue, and the driving tasks are simpler with less opportunity for the car to do something crazy than in busy city driving.
> By taking away throttle control and steering, you leave yourself with only decision making. It's reducing major components of normal driving behavior. It's not perfect, but knowing its strengths and weaknesses allows for better driving in my opinion.
Assuming you know its strengths and weaknesses. See next post, below.
I'm wondering if "reducing major components of normal driving behavior" is such a good idea?
What I'm getting at-- the delusion/illusion that the brain/mind is somehow separate from the body-- that learning can all be in the abstract without any connection to tangible reality.
How can you make throttle decisions if you don't have throttle control? Wouldn't having throttle control at all times mean you have more awareness of the machine you are controlling and can thus decide faster and act better? Because this control is already part of your thoughts?
Throttle control and steering _are_ decision making. If you can't do these as well as every other task you need to do, you don't belong behind a wheel, period.
And before I hear a "that's exclusionary": yes, excluding people unable to be safe while driving 2 tons of steel launched at high speeds is a good thing.
I can see how that might be the case for a very new driver, but for anyone who has driven for any period, "throttle control and steering" is completely autonomous and requires negligible consideration. Instead my focus is entirely on exceptional situations. The current FSD seems like I now have to focus on throttle and steering and the exceptions just to keep guard that the system doesn't do something disastrous.
As others have said, I cannot fathom how having to babysit an occasionally suicidal driving system is better than having no system at all. It seems perfectly balanced at the absolutely worst point of both being reliable enough that you stop paying attention, but unreliable enough that it catches you not paying attention.
If an FSD system really let me sit and watch a movie or read a book or something -- if it was that good -- then I 100% get it and would be signed up. But one that you have to babysit seems worse than pointless.
> for anyone who has driven for any period, "throttle control and steering" is completely autonomous and requires negligible consideration.
As someone who has driven a Tesla tens of thousands of km now and was driving for decades before, I disagree. I can see why you'd think that; however, it's like a muscle that's been tight for years. You're so used to it, you don't notice it anymore. Once it loosens, you're surprised how much better it feels.
I used a Comma 3 (somewhat close to FSD, just simpler) and I found that it allows you to look further ahead than you normally would. Normally you might be watching the first 50 ft for issues; now I find I'm able to expand out to 200 ft and look around more for traffic.
It's not that 50 ft is the only zone, but it's my main focus, that and the traffic around me (mind you, this is all highway). With the Comma, it moves out a good bit, allowing me to process more of the future traffic and the edge cases that might come up at highway speeds. Also, it's vision and radar based, so it keeps me 250 ft+ from the car in front of me and holds speed; it won't ever change lanes. This is why it's more of a LKAS with better settings.
Of course there are cases where the system doesn't work at all, and it is not well suited to surface roads.
Its behavior is very predictable, and it's easy to feel when it's seeing something I'm not. (Slowing down to match speed 1:1 with the car in front of me when they let off the gas is a common example.)
50 ft is 15 meters and change; at regular car speeds that is covered in less than your reaction time. It's essentially an accident waiting to happen. At highway speeds you should be looking about 200 ft ahead of you for safe driving. See the 'two second rule', and that's for clear skies and daylight.
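To make the arithmetic concrete (my own numbers, just the standard mph-to-ft/s conversion, not taken from the parent comment):

    # Distance covered in two seconds at typical speeds, and how long a
    # 50 ft lookahead actually lasts.
    MPH_TO_FTPS = 5280 / 3600  # 1 mph = ~1.47 ft/s

    for mph in (30, 45, 65):
        ftps = mph * MPH_TO_FTPS
        print(f"{mph} mph: 2-second gap = {2 * ftps:.0f} ft, "
              f"50 ft is covered in {50 / ftps:.1f} s")

    # 30 mph: 2-second gap = 88 ft, 50 ft is covered in 1.1 s
    # 45 mph: 2-second gap = 132 ft, 50 ft is covered in 0.8 s
    # 65 mph: 2-second gap = 191 ft, 50 ft is covered in 0.5 s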
Every day I drive, I see dozens of incidents where I think someone should be ticketed for reckless driving: tailgating, excessive speed, unexpected lane changes, etc
They never, ever are. We do not enforce good driving except after people die (or even then, vehicular homicide has low sentences).
If you can't keep your visual scan out further without relying on a driver assistance system then you're incompetent and shouldn't be allowed behind the wheel in the first place.
I guess to play devil's advocate here, the question you need to be asking is whether it would still be worth it if a loved one was driving your Tesla and crashed because it drove them into a train.
Even if Tesla Autopilot were significantly safer than a human driver, I know I would still struggle knowing a loved one died because the AI in my car decided to make a really stupid (albeit unlikely) decision. Right or wrong, human error is something we've always tolerated in society. If you're using a power tool and injure yourself, so long as there is nothing that can reasonably be done to make that tool safer, it's on you. But if once in a blue moon the tool malfunctions and injures you, that is something we don't tolerate.
It seems Autopilot at the moment exists in this in-between place where the driver assumes full responsibility so all situations are considered human error even if the AI is clearly malfunctioning. The more advanced autopilot gets the more we're going to have to question whether it's reasonable for the driver to expect autopilot to malfunction and take full responsibility for their car's actions.
I think of it in terms of I am still piloting the vehicle, but FSD is executing the maneuvers. This means that I am able to focus on the bigger picture (looking further forward, paying attention to what is going on in all lanes, etc).
I am a MUCH safer driver with autopilot/FSD than without.
So, to your devil's advocate, I am sure I would blame myself if my family member died while driving my car no matter what the car was. But FSD in my car has prevented accidents for me and my family.
Edited to add:
I don't want to pretend it is without risks. My biggest fear with FSD/autopilot is that it requires a constant understanding of who is in control. And that is the thing I stress to my friends/family. Your #1 job, and the thing that can kill you, is thinking that autopilot is engaged when it isn't and running a red light or driving off the road on an interstate exit.
Tesla gave me a loaner for a couple days while my car was in the shop. The loaner did not have FSD, and there were a couple of times I expected it to stop and it didn't. It was mentally tough b/c it was familiar like my car, but it wasn't my car.
> I am still piloting the vehicle, but FSD is executing the maneuvers.
Look, there are some things I really like about your reply, and I applaud your approach to this.
My question is this: If you are the pilot, how do you communicate with the FSD to have it execute the maneuvers? What is the command interface? What is the level of "maneuver" ("drive to the grocery store" vs "angle the front wheels 7 degrees to the left to initiate a left turn")?
> "there were a couple of times I expected [the loaner] to stop and it didn't."
So these times, in your car, are circumstances where you expect the car to stop. Did you (as the pilot) tell the car to execute the stopping maneuver?
> Your #1 job, and the thing that can kill you, is thinking that autopilot is engaged when it isn't and running a red light
What if it is engaged and still decides to run a red light? Or to suddenly brake to a full stop on a highway so you get rear-ended?
> What if it is engaged and still decides to run a red light?
The Teslas don't drive particularly aggressively, so you're quite aware ahead of time if it's not slowing down to stop for a red light.
> Or to suddenly brake to a full stop on a highway so you get rear-ended?
This is harder to deal with as, if it happened, it could be very sudden. That being said, if someone rear ends you for stopping, it's their fault (unless you braked in bad faith, like a brake check). Not that it would make you feel much better if you got injured or killed in such an incident.
On the other hand, many manufacturers have automatic emergency braking systems that occasionally incorrectly brake, and humans occasionally incorrectly emergency brake too. I have emergency braked due to a plastic bag blowing across the highway. Tesla on the whole seems to perform well compared to most vehicles AEB systems.
- turn signal to initiate a turn.
- scroll up/down on the right steering wheel control to increase/decrease max speed
- left/right on the right steering wheel control to increase/decrease follow distance
- nudge the gas pedal to be a little more aggressive (starting from a stop sign, executing a turn that FSD has already started signaling, etc.)
- and sometimes I just take control via brake or stalk. I generally don't disengage with the wheel since that keeps the speed on auto.
As for when the loaner didn't stop, it didn't have FSD. So while it looked like my car, it didn't have the same features as my car (just basic autopilot, not FSD).
I have not had situations where FSD was engaged and failed to stop or brake from highway speeds.
> The loaner did not have FSD, and there were a couple of times I expected it to stop and it didn't. It was mentally tough b/c it was familiar like my car, but it wasn't my car.
You need to meditate on this. Hard. You're more right than even you realize. Your car's behavior will change on you overnight. Your access to charging infra, or even the capacity to access the full capabilities of your vehicle, is locked away and prone to change from day to day. You don't get to opt out. You don't get to say "No, this is fine exactly as is."
What you believe is your car, is by no means your car.
Interesting! Kind of like pair programming, but for driving. Do you think there are any interface cues that could make this style of working with the system more efficient?
So what are you doing other than supervising the car/driving?
That's what I don't get. You say it's useful. That implies you're doing something you weren't able to before, or getting some extra utility out of it, yet, from my understanding, you legally speaking should be no less attentive in the operation of this vehicle than any other.
In fact, you let your family operate it in FSD? Are you really doing them any favors? Driving is hard enough without having to consciously override an additional source of errors. You may be unintentionally training them to operate Autopilot/FSD instead of driving safely in general.
A guy I know said the tech is so good you let your guard down.
He said there are days on 101 around 4:30 when he puts his sunglasses on and closes his eyes. He knows he's taking a calculated risk. He's in bumper-to-bumper traffic.
He also said there were times he let the car drive home, with one eye open, because he was tired.
With tech that good, I can see where drivers let their guard down compared to a guy in an old truck with a clutch. A vehicle he knows inside and out. A vehicle that needs his constant attention because of the sketchy alignment done years ago. Plus--he just can't afford a fender bender, or worse, an ever-increasing traffic fine. Hell--the stress of the old car might keep him more alert? Kinda like being in the woods when you know it's your wits, or a bear might tear you apart?
A vehicle he can drive as well and as effortlessly as a pro marathon runner can run. A vehicle that needs as little attention as his legs when walking. Because he knows it inside and out, he knows how the sketchy alignment affects handling at low speeds and at high speeds.
I was going to say "as well as he can walk", but then I remembered that many Americans are not good walkers.
Nice that you brought up fender-benders, though. With that old truck, it probably doesn't matter if the fender bends a little more. Versus the cost to repair the Tesla if it gets a ding...
On familiar routes with nothing unusual going on, I also can let my guard down. But you do get alerts if not paying attention and will get kicked out of the beta after 5 or 6 of those. I have had 2 of those alerts in the course of the 6-7 months I have been on the beta.
It's a tram, not a train. The driver does call it a tram, but labels the video a train... It's the new era of word twisting... inspired by FSD...
Edit: Still the same video; here the driver has to brake hard. It would run a red light and get several people killed at that large, busy, and fast intersection:
https://youtu.be/yxX4tDkSc_g?t=738
Insane....just insane, and that is being diplomatic.
I always post this in these threads, but my one experience driving a friend's Tesla on autopilot involved, after about two or three minutes of driving, a sudden attempt at 'correcting' onto the wrong side of a divided road at an intersection.
It was admittedly sort of a weird intersection (oddly-angled 4-way on a curve) but it did feel awfully enthusiastic about driving into oncoming traffic.
I'm just sitting here vexed about the mislabeling of a tram as a train. Like, the guy himself called it a tram but then used the word train in the title. If you have to lie to get a point across it just takes away from that point, valid as it may be.
Unprotected left turns are one of the most dangerous situations in driving. I consider myself a competent driver, haven't been in an accident in over two decades, and I barely trust myself to make an unprotected left. I've heard that UPS (used to?) ban their drivers from making unprotected lefts. Not in a million years would I ever trust this maneuver to software. Humans screw it up all the time:
In theory, an unprotected left should be easier for software than a human. The issue with them for humans is that there are too many places traffic (and pedestrians) can come from. For computers, once you solve the problem of seeing traffic coming from one direction, Nx the computing power can see traffic coming from N directions.
The problem Tesla has is that they still can't do basic things reliably enough.
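To make that "Nx the computing power" point concrete, here's a minimal sketch. It's purely illustrative and has nothing to do with Tesla's actual stack; all the names are made up. The point is just that covering N approach directions is the single-direction problem repeated, whereas a human only has one pair of eyes to swivel:

    # Illustrative sketch only: if a single-direction "is anything
    # approaching from over there?" check works, an N-way unprotected
    # left is just N independent runs of that same check.
    from typing import Callable, Sequence

    def left_turn_is_clear(
        approach_frames: Sequence[object],           # one sensor view per approach direction
        conflict_detector: Callable[[object], bool]  # True if a conflicting road user is detected
    ) -> bool:
        # Unlike a human driver, the software has no single gaze to divide:
        # every approach gets a full detector pass on every cycle.
        return not any(conflict_detector(frame) for frame in approach_frames)

The catch, of course, is that the single-direction detector is the hard part, and no amount of fan-out fixes occlusion or a camera that simply can't see far enough down the road.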
Human drivers are also impatient, distractible, poor at judging speed, and have visual blind spots. But a big problem with a lot of unprotected lefts is poor visibility of oncoming traffic. The car just isn't in a great position to see down the road. Tesla has to deal with that same limitation.
Motorcycle rider here, cars doing left turns across oncoming traffic account for the majority of car x bike collisions. If the drivers' eyes aren't on you, you're about to join that statistic.
Just to add to that, the driver could also have failed to accurately judge the motorcycle's speed. Motorcycles being so skinny and relatively uncommon makes drivers perceive them as going slower than they are and being farther away than they are.
It's one of the many reasons speeding is so dangerous on a motorcycle: the driver might see you but still think they have a gap, because their perception of your speed is all wrong. It's much safer to go the speed a car would be going, so that you align with what the driver expects to see day to day.
That's probably a US-specific challenge. Unprotected left turns are much rarer in Europe due to widespread use of roundabouts which are probably much easier to handle for both humans and software.
They are common in Massachusetts but in very few other states, though they do seem to be showing up in more places in the last few years. I grew up with one nearby in Miami, so I've never struggled with them, but a U.S. driver facing a roundabout for the first time can be a comical and sometimes dangerous situation.
Seems like another example of Moravec's paradox[1]. The higher order decisions involved in driving seem hard because we're conscious of them. But our visual system is assumed to be "just two cameras" because we have no conscious awareness of what's going on when we create an internal representation of our surroundings using our eyes.
I mean, what are eyes except constantly refocusing cameras with changeable fields of view, light intake, quick focus backed by a brain that can take action on what is seen even before you actually visualise it?
To Musk and his goblins, that's basically just cameras. But it's okay, people keep assuring me he's a great engineer.
Or a complete fucking sociopath that's lied his way to success but hey
LiDAR breaks from bright signage reflection and water droplets on sensor covers… every tech has challenges: look at all the LiDAR vehicles that have had accidents.
> Google asks me to solve CAPTCHA's of crossing walks and traffic lights because robots can't do it.
Any evidence this premise is true? It's entirely possible that they perfected crosswalk/traffic light identification, and they're continuing to use it for captcha purposes because it's easier than rewriting the captcha to use another form of images.
Why would they crowd source image identification that is already solved? There are plenty of other images they need human help on.
Besides if robots can do it then someone would be using them to solve CAPTCHAs. I recently started using a VPN and now google asks me all the time to solve them before completing my search, so I'm assuming that robotically searching Google is something people want to do.
> Why would they crowd source image identification that is already solved? There are plenty of other images they need human help on.
Because they'll need to set up an entire pipeline to get those images into the system. Meanwhile, pictures from Street View are plentiful and free (both in terms of licensing costs and development costs).
>Besides if robots can do it then someone would be using them to solve CAPTCHAs.
Maybe such models are in the hands of the top AI companies (i.e. companies making autopilot systems), and they're not really interested in starting a side business of captcha bypass?
The other possibility is that they're already in the hands of bad guys, who are just keeping them under wraps so they can monetize them without being detected. For instance there's this: https://addons.mozilla.org/en-US/firefox/addon/buster-captch..., so the idea isn't too far-fetched.
Basic Autopilot for highway driving has been amazing, but I turn it off the moment I get onto a street because I'm not comfortable with it. I've had both a Model S and now have a Model 3. I skipped the FSD package on the Model 3 because it's just not ready.
We are living in the ‘good old days’ when you could deploy your self driving car with minimal regulatory oversight. In 5 years time it will be very different. The company that collects the most quality data in this laissez-faire environment may gain an unassailable lead.
The comments are a pile-on of criticism and "how can this be legal". The question is what percentage of self-driving hours lead to a situation like this: is it high enough to show it's unsafe compared to driving without Autopilot, or is it statistically an outlier?
Someone on here[1] already compiled a list of 7 incidents in this video of a ~30 minute drive. 3 of these look like they could have resulted in a severe crash or even a fatality if the driver did not intervene (the train, proceeding into an intersection on a solid red light, ignoring pedestrians). That's a pretty bad batting average - most people don't nearly crash 3 times in half an hour of driving.
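For a rough sense of scale, here's the back-of-the-envelope math; the average speed is my own assumption, not something taken from the video:

    # Back-of-the-envelope only; 25 mph average urban speed is an assumption.
    interventions = 3        # incidents above that looked potentially severe
    drive_hours = 0.5        # ~30 minutes of driving
    avg_speed_mph = 25       # assumed typical city average, not measured

    interventions_per_hour = interventions / drive_hours       # 6.0
    miles_driven = avg_speed_mph * drive_hours                 # ~12.5
    miles_per_intervention = miles_driven / interventions      # ~4.2

Whatever baseline you pick for human drivers, one safety-critical intervention every four-ish miles is not a close call.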
It should be banned unless there is a way to know ahead of time that it is trustworthy within its intended operating envelope. Said operating envelope should be clearly and unambiguously explicable to a lay person.
Neither of those conditions is the case. I'd appreciate it if people would stop trying to use statistics to give an unproven system, tested with a fundamentally unsafe methodology and minimal experimental controls, a free pass.
No, I was just trying to work out what you were saying.
I do, however, think that it's wrong. Why? Because human error is already enough to deal with; now couple that with AI error. It makes society, crash investigation, and road design way, way more complicated.
Maybe if we ban human drivers 100% and optimize for cars then it would be a good idea. I just can't see cars not needing supervision for a very long time.
I doubt people are going to buy cars that might kill them by making "mistakes". We will see but I think it's going to be a hard sell. People can easily accept their own mistakes. A computer though?
It doesn't matter how statistically safe it is. Putting unreliable software like that on the streets causes more crashes than human drivers do. The sudden swerving, the phantom braking, the absolutely batshit insane behavior, the frankly dangerous "LiDAR is for scrubs, we're only going to use cameras" attitude, and Tesla having Autopilot turn off ever so slightly before a crash so that their logs can say it wasn't on, are all reasons it should be off the roads and Tesla fined so hard.
It doesn't matter, because of the way it behaves in that small percentage of cases, where it acts like a blue-collar worker on a cocaine bender on a Friday night. It doesn't slam itself into a tree like a drunk driver in the middle of the night; it is so unpredictably bad that I half expect to see reports telling me it tried to take a train line because it's a shortcut.
This feels like an escalation from the usual cars and bollards and things. Possibly The Algorithm has become bored, and we can expect a series of robocars menacing more and more unlikely vehicles, culminating in a spectacular near-miss with a zeppelin.
A Tesla on Autopilot took out a helicopter today...
Elon Musk commented that it "was totally rad" and that his engineers were in the process of attempting to replicate the finding as a new "Vigilante" mode, to become available next Thursday, probably.
The FSD beta signup clearly says autopilot may do the wrong thing at the worst time, so the user should expect that IF they want to try the FSD early access. Otherwise drive it manually or don’t opt in to FSD.
Most of the time when my Tesla tries to turn into traffic, it's because the GPS fix isn't solid, so the car thinks it is in a different position. Of course, this video is in full-screen mode, so we can't see the GPS info.
I stay alert and enjoy testing this new and amazing software. I am responsible and if I’m not comfortable with the risk I can always turn it off or go back to the regular software.
Personal anecdote: I was behind a Tesla on a bit of a mountainous stretch (single lane) on my drive home on a Sunday evening, so there was literally no traffic, and I noticed the Tesla was painfully slow, almost at a standstill. I realized it was on Autopilot and sticking exactly to the speed limit, although the driver could have easily done at least 10 mph over.
The big truth that dawned on me was: here in the northeast US, with mostly single-lane roads, these level-X driving cars are going to stick to the speed limits, and I'm totally curious to see how road rage is going to show up.
I actually think that dropping speed limits (on urban roads near me, limits have been changed from 30 to 20) is part of a strategy to make the roads safer for self-driving cars.
So, I think self-driving is going to be a slow experience.
Maybe once everyone has been slowed down, they'll bring out a premium self-driving product where, if you are rich, your car is allowed to move more quickly.
Is anybody publicly tracking Tesla's self-driving progress over time?
I have been googling a bit, but haven't found anything:
Is somebody out there driving the same route again and again, tracking the number of disengagements over time?
There are gazillions of YouTubers who constantly put out videos of themselves driving their Teslas. But they all do the opposite: they try to think of new routes for every video.
I wonder if there is not a single one out there who tracks and documents the progress?
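If anyone wants to start, the logging side would be trivial. Something like the sketch below (hypothetical file name and fields, not an existing tool), kept for one fixed route, would already say more than most of those YouTube drives:

    # Hypothetical sketch: append one row per drive of the *same* route,
    # then plot disengagements per drive against the software version.
    import csv
    import datetime

    LOG_FILE = "fsd_route_log.csv"  # made-up file name

    def log_drive(route_id: str, sw_version: str, disengagements: int, notes: str = "") -> None:
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.date.today().isoformat(),
                route_id,        # must be the same route every time, or the trend means nothing
                sw_version,      # whatever version the car's software screen reports
                disengagements,  # count of manual takeovers on this drive
                notes,
            ])

    # Example: log_drive("home-to-office", "2021.44.30", 4, "wrong lane before the left turn")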
This contraption is literally a disaster on wheels. It malfunctions on public roads and essentially puts the driver and others on the road at risk. Its other counterpart, FSD ('Fools Self Driving'), is even worse and is already being investigated.
There is zero point in defending the incompleteness of this so-called 'safety-critical' contraption. It simply does not work and is not ready.
It's a bit sad to imagine that Tesla's - perhaps to be considered reckless - release of their alpha/beta software on public roads could lead to legislation making it a lot harder for future companies trying to develop self-driving cars. So maybe Musk's pioneering work in this will actually set the field back in some way.
I'm a terrible driver, and it's probably best there is an automated system to back me up. And from my experience with my daily commute, there are also many other terrible drivers on the road.
While Autopilot isn't perfect, it's better than just throwing your hands up and saying nothing can be done about accidents.
Not sure what this is supposed to tell me. The number of cars driven by humans and the number of vehicles driven on Autopilot differ by a huge magnitude, so what's the point of comparing absolute numbers of crashes? At the very least, to make a fair comparison we need crash rates, crashes per amount of driving done by humans versus by autonomous vehicles (see the sketch after this comment).
The fact that these bugs can actually be fatal, sometimes even to random people who have nothing to do with this test, should be very concerning at the very least, and not brushed away with "but X accidents happen anyway already".
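To spell out the normalization, here's a sketch; the numbers are placeholders I made up purely to show the shape of the comparison, not real crash statistics:

    # Illustrative only: raw crash counts mean nothing without dividing by exposure.
    def crashes_per_million_miles(crashes: int, total_miles: float) -> float:
        return crashes / (total_miles / 1_000_000)

    # Placeholder inputs, NOT real data:
    human_rate = crashes_per_million_miles(crashes=6_000_000, total_miles=3_000_000_000_000)  # 2.0
    autopilot_rate = crashes_per_million_miles(crashes=150, total_miles=50_000_000)           # 3.0

    # Only the normalized rates are comparable; 6,000,000 crashes versus 150
    # crashes in absolute terms tells you nothing about relative safety.

And even then the comparison is only fair if the miles themselves are comparable; highway miles in good weather are not the same as city miles in the rain.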
Was the train running a red light? This seems confusing for a human driver, too. I'm usually looking for cars and pedestrians, not trains when turning left.
Did you watch the video? There's absolutely nothing confusing about the tram coming towards the Tesla, whether it has the right-of-way or not. And there's nothing in the video that suggests the tram was running a red light.
> And there's nothing in the video that suggests the tram was running a red light.
There's also a set of green lights in the video that suggests that the tram was absolutely fine to go!
And an autopilot video showing it knew the train was there and still went anyway, despite also thinking that it was two unconnected busses.
IMO, from looking at the screen, it's just not factoring in the fact that the tram is going to accelerate the way it does, because it's a train. Even if it were a bus, though, Autopilot is cutting that bus up.
No, it was not. If you get a green light without a green left arrow, it means you can turn left only if you yield to oncoming traffic. This is a pretty basic traffic rule. The tram had a green light. The Tesla also had a green light, but it had to wait for the opposing lane to be clear before it could go.
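Spelled out as logic, it's something like this; my own simplification of the rule as described above, and real traffic codes have more cases (flashing arrows, signage, and so on):

    # Simplified version of the rule described above; not a complete traffic code.
    def may_turn_left(has_green: bool, has_green_left_arrow: bool, oncoming_clear: bool) -> bool:
        if has_green_left_arrow:
            return True                      # protected turn: opposing traffic is held
        return has_green and oncoming_clear  # unprotected: a plain green means yield first

A plain green with an oncoming tram fails the second check, which is exactly the case in the video.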
If they do, it may end up being a very permanent lesson; those things may look flimsy, but they definitely aren't. A frontal collision between a truck and a tram in Amsterdam left the tram without so much as a dent, though it did have a couple of scratches in the paintwork. The truck could have served as a mold to cast trams from.
Off topic, but ~40 years back, I saw a Dodge pickup truck rear-end a Miata. It stove in the whole front of the truck, leaving the Mazda unscratched. Working as Designed.
Funny, because at first glance it looked like that was going to go the other way. I've seen a similar accident, with a two-decades-old Volvo being hit amidships by a recent-model Citroen; in spite of their respective reputations for safety, that did not at all come out the way I expected. The passenger compartment in modern cars is quite amazing with respect to the kind of crash it can be in while still letting you open all four doors afterwards.
Turning in front of them, cutting them off, passing when doors are open and people are getting off, driving in the reserved lanes (sometimes even getting stuck!). Yes, people are idiots around them.
* Almost driving right into a barrier because road is closed: https://youtu.be/yxX4tDkSc_g?t=426
* Choosing the wrong lane and getting confused: https://youtu.be/yxX4tDkSc_g?t=666
* Trying to run a red light, right into traffic: https://youtu.be/yxX4tDkSc_g?t=732
* Stopping in the middle of an intersection without reason: https://youtu.be/yxX4tDkSc_g?t=789
* Choosing the wrong lane for a left turn: https://youtu.be/yxX4tDkSc_g?t=805
* Constantly activating and deactivating the left turn-signal without reason (you can't even turn left there): https://youtu.be/yxX4tDkSc_g?t=902
* Unable to make a proper left turn, almost running into pedestrians: https://youtu.be/yxX4tDkSc_g?t=1033