Self-driving cars are constantly subject to mini-trolley problems. By training on human data, the robots learn values that are aligned with what humans value. -- Ashok Elluswamy (VP AI/Autopilot at Tesla)
If they were using my data, I'd be partly responsible, having failed to swerve around the last few suicidal prairie dogs I rolled over. I hate when that happens, but I don't attempt high-speed evasions. I would if it were something larger, human or not, out of self-defense. And it's never happened, but I hope I'd stomp the brakes and swerve for a toddler. I'm happy with an autopilot learning that rule set, even though I've lost too many cats under tires.
You probably get more honest answers by presenting a trolley problem and then requiring a response within a second. It's a great implicit bias probe.