There's also the question of incident response: if a human driver "malfunctions", you take them out of service and the rest of the world keeps going. But if a self-driving model malfunctions, there are potentially millions of vehicles running the same software, each ready to make exactly the same mistake until the issue is isolated and fixed. If the software is demonstrably dangerous, should we ground the entire fleet running it until the issue is resolved and the software re-certified? How much would that cost?