I almost have trouble imagining realistic scenarios that involve deliberately endangering the occupants of the vehicle in order to protect others. (And any manufacturer who designed such a system is probably not going to find many takers.) And, for that matter, there are very few scenarios where swerving and running into something at speed is a better decision than braking hard.
I'm the same, I can't really come up with any realistic vehicle accident scenario that would be a classic trolley problem. I may just be lacking in imagination though. I'd be interested if anyone could present a realistic example of such a scenario, and one that isn't wildly unlikely ever to happen.
Something that does occur to me is that humans tend to just react automatically in extreme situations, whereas an AI would presumably have precious milliseconds to weigh different options, so an AI might face a trolley problem in a situation where, for a human, none would exist.
For example, if someone steps out from behind a bus, a human will most likely slam the brakes on instantly and slide into the pedestrian at a slower but possibly still fatal speed. An AI, having a bit of time to think calmly about it, might instead steer hard into the bus while also braking, which might slow the car down enough to prevent too much injury to the pedestrian whilst causing damage to the bus, writing off the car, and possibly injuring the occupant (though the car safety features should help).
>possibly injuring the occupant (though the car safety features should help)
This seems to take the opposite viewpoint of the OP which said the "obvious" answer is to do whatever is best for the passengers.
>one that isn't wildly unlikely ever to happen.
A lot of the comments here are along these lines and, to a certain extent, they miss the point. Risk is a combination of probability and severity, even before considering ethics. In cases where the probability can be reliably calculated, there needs to be a threshold for what risks are accepted in order to make informed decisions. I'll try to illustrate with a couple of examples.
Say there is software that decides to take a specific action based on sensor input. Maybe the action is to accelerate and swerve to the left if an obstacle is detected approaching from the right at an intersection. Let's make it somewhat more complicated by having a jaywalker step off the curb on the left. There are (at least) two prospective events:
1) The car does not perform evasive maneuvers and increases the risk of being hit by the other vehicle
2) The car does perform evasive maneuvers and increases the risk of hitting the pedestrian
Each has a probability and a severity. To keep things simple, say the probability of each is 5%; however, they have different severities. In scenario 1) the severity to the pedestrian is negligible while the severity to the driver(s) is moderate: with modern cars the chance that either driver is killed is fairly small, but there is an increased risk of sub-fatal injuries and likely much more damage to the vehicles. In scenario 2) the severity to the pedestrian is a risk of injury or death, even though the severity to the driver is fairly low (probably repair work covered by insurance).
Remember: risk = probability x severity. So if we take the simplistic approach of only considering the perspective of the driver ("do whatever favors the passengers of the car"), we have effectively minimized the passengers' risk of moderate injury by increasing the pedestrian's risk of grievous injury. Adding ethics gets even more complicated: does it change if the pedestrian is a child? A mother pushing a stroller? A homeless person?
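To make the arithmetic concrete, here's a rough Python sketch of that expected-risk comparison. The 5% figure is from above; the 0-10 severity scale and every number on it are made up purely for illustration:

    # Toy expected-risk comparison for the intersection example above.
    # All severity scores are invented; a real system would derive them
    # from sensor data and injury/crash statistics.

    scenarios = {
        "1_brake_only": {              # stay in lane, risk being hit by the other vehicle
            "p_event": 0.05,
            "severity_occupants": 5,   # sub-fatal injuries, vehicle damage
            "severity_pedestrian": 0,  # pedestrian not involved
        },
        "2_swerve_left": {             # evade the vehicle, risk hitting the jaywalker
            "p_event": 0.05,
            "severity_occupants": 1,   # mostly repair work
            "severity_pedestrian": 9,  # serious injury or death
        },
    }

    for name, s in scenarios.items():
        risk_occupants = s["p_event"] * s["severity_occupants"]
        risk_pedestrian = s["p_event"] * s["severity_pedestrian"]
        print(f"{name}: occupants={risk_occupants:.2f} "
              f"pedestrian={risk_pedestrian:.2f} "
              f"total={risk_occupants + risk_pedestrian:.2f}")

A "favor the passengers" policy picks scenario 2 (occupant risk 0.05 vs 0.25) even though its total risk is higher (0.50 vs 0.25) once the pedestrian is counted, which is exactly the trade-off described above.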
Another example: what if the car is driving past a school playground and notices a ball roll out from between parked cars into the road? A human can intuit that playground + ball = a higher probability that a child may run into the street after the ball. That might be enough to make a human driver brake hard even if a child isn't immediately visible. Would software intuit the same? Even if it did, would the car do the same if it increased the risk of a collision from behind?
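For what it's worth, the playground + ball intuition can at least be written down as a contextual prior; whether production systems actually do anything like this, I don't know. A toy sketch, with every multiplier, probability, and severity invented for illustration:

    # "Ball near a playground" as a contextual prior on a child entering the road.
    BASE_P_CHILD_ENTERS_ROAD = 0.001

    CONTEXT_MULTIPLIERS = {
        "ball_rolling_into_road": 50,    # strong cue that someone may chase it
        "near_school_or_playground": 10,
        "parked_cars_block_view": 5,
    }

    def p_child_enters_road(contexts):
        p = BASE_P_CHILD_ENTERS_ROAD
        for c in contexts:
            p *= CONTEXT_MULTIPLIERS.get(c, 1)
        return min(p, 1.0)

    def should_brake_hard(contexts, p_rear_end_if_braking=0.02,
                          severity_child=10, severity_rear_end=2):
        # Compare expected harm of hitting a hidden child vs. being rear-ended.
        risk_child = p_child_enters_road(contexts) * severity_child
        risk_rear_end = p_rear_end_if_braking * severity_rear_end
        return risk_child > risk_rear_end

    print(should_brake_hard(["ball_rolling_into_road",
                             "near_school_or_playground",
                             "parked_cars_block_view"]))  # True: brake hard
    print(should_brake_hard([]))                          # False: no cue

Even this toy version shows where the hard part lies: the answer flips depending on how much weight the rear-end collision risk is given.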
The safety-critical software world spends a lot of time on small-probability events. The more recent 737 MAX issue has similar through-lines, though there are many confounding issues, as is typical with safety mishaps. MCAS was classified as "hazardous" severity, which by company policy required two AOA sensors to reduce the probability of failure; this would effectively reduce the risk, since risk = probability x severity. However, because the severity was "hazardous" and not "catastrophic" (i.e., they didn't think it could cause a plane to crash), the extra sensor was an optional software feature. Had the severity been attributed appropriately, I think there is a greater likelihood that the second AOA sensor would have been a mandatory feature, because they (I assume) have a specific risk threshold they are aiming for. All this to say: it's not enough to say "these are small-probability events so they don't have to be addressed"; probability is just one part of an overall risk strategy.
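To make the "specific risk threshold" idea concrete, here's a toy sketch of how the severity classification changes whether a redundant sensor is mandatory. The sensor failure probability is invented, and the per-severity targets are only in the spirit of the usual aviation guidance, not the actual certification numbers:

    # Toy version of a severity-driven design rule.
    ACCEPTABLE_P = {           # max acceptable probability of the failure condition
        "hazardous": 1e-7,     # per flight hour (illustrative targets)
        "catastrophic": 1e-9,
    }

    P_BAD_AOA_SINGLE = 1e-8    # single sensor feeding MCAS (invented figure)

    def design_meets_target(severity, p_failure):
        return p_failure <= ACCEPTABLE_P[severity]

    for severity in ("hazardous", "catastrophic"):
        single_ok = design_meets_target(severity, P_BAD_AOA_SINGLE)
        print(f"{severity}: single sensor acceptable? {single_ok}; "
              f"second sensor mandatory? {not single_ok}")

Same hardware, same software, but whether the second sensor is mandatory flips entirely on which severity bucket the failure was put in.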
The decision space with safety-critical software is extremely large precisely because you must mitigate low-probability events if the severity is large enough. I sometimes wonder whether missed self-driving technology targets are partly attributable to naively oversimplifying the problem.
Thanks for the detailed reply. I will say first of all that I am an autonomous-vehicle sceptic. I think it could be useful in some situations but not in all.
I would say that your first example really exemplifies my point about realistic examples. You start off well but then your point about ethics goes a bit off the rails in my opinion. I struggle to think of how to explain this point but I think it all comes down to "intelligence".
Us humans are, for the most part, absolutely terrible in situations like your first example. It's basically impossible to make any kind of rational decision in such a short time frame; we just act instinctively. There is no way we could make any kind of logical or ethical decision; we would mostly just slam on the brakes, and possibly swerve accidentally depending on what we did with our hands in the stress of the moment.
So you are already asking our autonomous technology to make decisions that a human couldn't make in the same situation. Now that is fine as computers can process information much faster than humans, but then the question becomes "how much faster?"
Let's face it: your example is asking a computer to process a huge amount of information and then make a complex decision regarding that information in about one second. I'm not convinced that is even possible. Not only that, you then start talking about ethics. How on earth is your computer going to work out whether someone is a homeless person? In my personal opinion you are asking too much from the computer in too short a time frame.
Us humans make cost/benefit decisions regarding safety all the time. No system is going to eliminate all deaths from vehicles; there will always be situations we can't predict. If you start going down the rabbit hole of "let's try to evaluate all risks no matter how improbable" you will never get anywhere because in such a chaotic environment as a busy city centre it is virtually impossible to predict everything that could possibly go wrong.
Unless we can create a general AI (which I'm not convinced is possible, and even if it were, why would it want to cart us about?) then we will never be able to create an autonomous car that can assess unexpected situations in a way that would satisfy human standards. I think the best we can do is to have them operate in more constrained environments where the risks can be reduced, and have humans drive in other, more chaotic environments.
I'd love to write more but I have to get to work and I don't want to bore you :-)