Because they're led to believe all this public beta "testing" actually does something to improve the system. So they go out on the roads, enable FSD, and click the feedback button when it makes a mistake. It's pure theatre, since it does nothing to improve a fundamentally poor system.
From my observations I can’t show causality, but the three situations where I used the feedback button the most all got addressed.
- my town inexplicably decided to paint dashed white lines on either side of the bicycle shared-lane markers. The Tesla would see dashed white lines, which normally mean separate lanes going the same direction, and dance around trying to pick a lane, but they were all too narrow. The traffic lane was double wide to account for church street parking, but not marked as such, so the markings effectively created a tiny center lane and two side lanes that were each just a bit too small.
- on winding tree-lined roads, when an oncoming car first appeared around a bend, the car would assume it might be going straight and about to hit us. There would be a quick deceleration and a dodge toward the shoulder until it realized the oncoming car was actually following the curve and would probably stay in its lane.
- when the pavement widens for a short stretch to include an unoccupied, unmarked parking lane, the car would assume the road had suddenly gained very wide lanes, decide it was too far toward the center, and jerk over.
At this point FSD does a passable job on my regular daily drives, except at one intersection where something in its map data is wrong: it wants to exit the right-turn lane just before turning right. I don’t let it make turns at contested intersections yet, because it is a little timid and might confuse other drivers. But that is improving. Just not there yet.
As for exceptions, it does a good job of finding and dealing with construction areas, garbage trucks, bicycles, and pedestrians.
I’d appreciate it if they’d add “slightly dodge roadkill”. It will merrily run over the same dead squirrel four times a day with precision.
They /are/ the training data and the trainers in a very literal sense. Every time they correct FSD or take over from it, I imagine it gets fed back into the training data. Will it ever create a fully self-driving system? Maybe.
That's not true, and Karpathy has confirmed it multiple times. They can create a fingerprint for data they want the fleet to send back to the mothership, and they may manually label datasets fed back through the report button on the MCU, but disengagements are not a signal they use at all.
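For anyone curious what that fingerprint/trigger mechanism might look like, here is a minimal sketch in the spirit of Karpathy's public talks about fleet "triggers": clips get queued for upload when they match a campaign predicate, not simply because the driver disengaged. Every name, field, and condition below is invented for illustration; this is not Tesla's actual code or API.

```python
# Hypothetical sketch of trigger-style fleet data collection.
# All names and conditions are invented for illustration only.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Snapshot:
    """A moment of driving telemetry paired with a buffered video clip."""
    speed_mps: float
    lane_width_m: float
    oncoming_vehicle: bool
    driver_disengaged: bool  # present in telemetry, but unused below
    clip_id: str


@dataclass
class Trigger:
    """A 'fingerprint': a named predicate that flags clips for upload."""
    name: str
    matches: Callable[[Snapshot], bool]


# Example campaign triggers, fingerprinting situations the team wants
# more examples of (narrow lanes, surprise oncoming traffic, ...).
TRIGGERS: List[Trigger] = [
    Trigger("narrow_lane", lambda s: s.lane_width_m < 2.5),
    Trigger("oncoming_at_speed",
            lambda s: s.oncoming_vehicle and s.speed_mps > 15.0),
    # Note: nothing keys off driver_disengaged -- per the comment above,
    # a disengagement by itself is not a signal they use.
]


def select_for_upload(snapshots: List[Snapshot]) -> List[Tuple[str, str]]:
    """Return (clip_id, trigger_name) pairs queued for the mothership."""
    queued = []
    for snap in snapshots:
        for trig in TRIGGERS:
            if trig.matches(snap):
                queued.append((snap.clip_id, trig.name))
                break  # one matching trigger is enough to queue the clip
    return queued


if __name__ == "__main__":
    fleet_data = [
        Snapshot(12.0, 2.2, False, False, "clip-001"),  # narrow lane
        Snapshot(20.0, 3.5, True, False, "clip-002"),   # oncoming at speed
        Snapshot(10.0, 3.5, False, True, "clip-003"),   # disengagement only
    ]
    print(select_for_upload(fleet_data))
    # -> [('clip-001', 'narrow_lane'), ('clip-002', 'oncoming_at_speed')]
```

Note that clip-003, a disengagement with nothing else unusual, never gets uploaded: under this model the team has to deliberately fingerprint a scenario before the fleet will send it back.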
It's useful for polishing an already mature system, but FSD is poor at even the basics because of their (lack of) sensors. They need more than low-quality 30-second video clips if they ever want a usable autonomous system.
I’m sure other people use FSD as a bad chauffeur, but around town I use it when I’m willing to be extra alert and train it. There is a button I press when the car has messed up, and the surrounding data gets sent back to the Tesla mothership as a defect report.
If I’m not willing to be in vigilance mode I just drive myself.
And the beauty of not advertising that, if an accident is imminent, the system deactivates itself to avoid any legal responsibility. Autopilot that deactivates itself. Nice.