"If they asked humans to draft 77 news articles and later when back to analyze them for errors, how many would they find?"
It doesn't matter. One self-driving car that kills a person is not the same as one person killing another person. A person is accountable; an AI isn't (at least not yet).
This feels like a strawman argument. I suspect the person you are replying to would agree with your last sentence. Can you think of any ways the two things might be perceived differently?
Your response implies that the comment is about how equally dead someone is in each circumstance, and then you take a position apparently opposite to the comment author's. To me, it stretches credulity that the comment was about that; my reading of it is that there are serious and interesting ethical/legal/existential questions at play with AI-induced death that we need to be grappling with. In this way, they are not the "same". Legally, who is to blame? How do we define "intent"? Are we OK with this becoming more normal? Putting lifespan issues aside, would you rather die of "natural causes", or because an AI device killed you?
The difference is that the person who (accidentally or not) killed another person will suffer consequences, aimed at deterring others from doing the same. People have rights and are innocent until proven guilty; we pay a price for that. Machines have no fear of consequences, no rights or freedom.
For the person that died and their loved ones, it might not make a difference, but I don't think that was the point OP was trying to make.