The sensors really aren't ready for it. For example, when I tested it while parallel parked next to a curb, the Model S decided to turn the wheels on its own and ended up scraping a rear wheel against the curb instead of pulling straight out the way it was originally aimed.
The current sensors and software are absolutely not ready for true self-driving; that much is clear to me after driving the S for six months.
10+ years ago, for the DARPA Grand Challenge, it cost real money (and good luck getting decent-resolution stereo at a decent FPS from a pair of 1-megapixel sensors, so most teams relied on lidar: $3K and you had a minimally decent 3D view of the scene ahead). Today tens-of-megapixel sensors cost next to nothing, along with the CPU power to process their output. One can have reasonable infrared too. Ultrasound sensors cost nothing. Short-distance lidar costs close to nothing as well. Millimeter-wave radar probably still costs a bit, simply because there's no mass production yet. When I look at the Google cars (the Lexus SUVs), they have at least a minimally reasonable set of sensors. Nobody else comes even close, and I don't understand why.
I'm not sure whether the biggest challenge is sensors or software. I don't know what Tesla is running, but I have a strong feeling that software at large is not up to the task of autonomous driving. Most software (including automotive software) has only very limited real-time behavior, due to memory allocation, OS preemption, interrupts, and so on; error-prone programming languages are used, and resource (memory) usage is often unbounded. I can't imagine that Tesla, or anybody else shipping self-driving features at the moment, is using something like Ada Ravenscar or advanced static validation techniques across all the components involved in those features, which are often quite complicated (image recognition, etc.) and therefore hard to run under such constraints.
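To make the bounded-resource point concrete, here's a minimal sketch of the style I mean: everything statically allocated up front, no heap use inside the control loop, so worst-case memory and timing stay bounded. It's plain C rather than Ada, and the sensor function, buffer size, and braking threshold are all made up for illustration; this isn't anything from a real automotive stack.

    /* Illustrative hard-real-time style: no malloc/free in the loop,
       fixed-size buffers sized at compile time, bounded-time operations. */
    #include <stdint.h>
    #include <stdio.h>

    #define HISTORY_LEN 8                     /* fixed at compile time */

    static int32_t distance_mm[HISTORY_LEN];  /* static storage, no heap */
    static size_t  head = 0;

    /* Stand-in for reading an ultrasonic range sensor (hypothetical). */
    static int32_t read_range_sensor(void) { return 500; }

    /* Push a sample into a fixed ring buffer: O(1), no allocation. */
    static void record_sample(int32_t mm)
    {
        distance_mm[head] = mm;
        head = (head + 1) % HISTORY_LEN;
    }

    /* Bounded-time average over the whole ring buffer. */
    static int32_t average_range(void)
    {
        int64_t sum = 0;
        for (size_t i = 0; i < HISTORY_LEN; i++)
            sum += distance_mm[i];
        return (int32_t)(sum / HISTORY_LEN);
    }

    int main(void)
    {
        /* A real system would drive this loop from a fixed-period timer
           interrupt; a plain for loop stands in for that here. */
        for (int tick = 0; tick < 20; tick++) {
            record_sample(read_range_sensor());
            if (average_range() < 300)        /* threshold in mm, made up */
                printf("tick %d: brake\n", tick);
        }
        return 0;
    }

Even a toy like this shows why it's hard: every operation in the loop has a fixed worst case, which is exactly the property that dynamic allocation, preemption, and a big image-recognition pipeline make difficult to guarantee.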