Hacker News

But that is due to the limited ability of humans to assess all those conditions instantly in a sudden, unpredictable situation. The machines should in theory be much better at that.


Grab an introduction-to-ethics book and think about how you would implement what's written in there - and how you would choose which ethical theory is the right one for the situation you're in. We had a lecture on that (the lecturer had an MSc in C.S. and was doing a PhD in philosophy), and many of the problems are very hard to reason about.

Even if you don't plan to build such software, I can really recommend that any computer scientist get a grounding in ethics; in addition to the lecturer's slides we used the book "The Cambridge Handbook of Information and Computer Ethics". Research in that direction is only now gaining traction (not only, but also the "how do you implement an ethical decision automaton?" part).

Remember: We humans are lucky to have intuition and reflexes, which in a sense take us out of the ethical question: accidents usually happen so fast that you can't think about the ethically acceptable reaction and just act more or less at random, or however your training and reflexes tell you to (or you're so in shock when you realize what's about to happen that you cannot react at all). If you run over a child because you swerved to avoid an elderly person, that's a bad situation to be in either way (assuming your driving was within legal parameters); if a computer actively decides to run over person X instead of person Y due to some algorithmic decision (or a coin flip, it doesn't really matter), that's arguably worse - because it ultimately decided that person Y deserves the right to live more than person X.

That's why there was, and still is, so much attention on the trolley problem: https://en.wikipedia.org/wiki/Trolley_problem


My comment was largely about situations where there is no ethical choice involved.

For example: something appears in front of the car and a decision needs to be made - whether to swerve left into the opposite lane, or whether that is dangerous and only the brakes should be applied. Humans have a hard time making this decision, because it's hard to assess the situation quickly.

The decision to only apply the brakes (common advice given to drivers) is safe only in the sense that in most cases it will be the safe thing. There will be cases where it is not safe - but they will be in the minority.

A machine will be better equipped to assess whether swerving into the opposite lane is dangerous in a specific situation. If it is not dangerous for anyone, the machine will be expected to do that instead of only braking, because in that specific situation it is the safest thing.

The choice here is not an ethical one, it's a purely computational one. There are situations where there is no one in the opposite lane, and it is simply safe to swerve there. But humans are generally not trusted to make that choice - because of our poor computational ability in such restricted timeframes.
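To make the "purely computational" point concrete, here is a deliberately toy sketch of that brake-vs-swerve decision. Everything in it - the function name, the fixed deceleration value, the boolean lane-clear input - is hypothetical and simplified for illustration; a real autonomous-driving stack would reason over uncertain sensor estimates, not clean booleans.

```python
def choose_maneuver(distance_to_obstacle_m: float,
                    speed_mps: float,
                    opposite_lane_clear: bool,
                    decel_mps2: float = 8.0) -> str:
    """Toy decision rule: pick 'brake' or 'swerve' based on whether
    braking alone can stop the car before the obstacle."""
    # Stopping distance under constant deceleration: v^2 / (2a)
    stopping_distance_m = speed_mps ** 2 / (2 * decel_mps2)
    if stopping_distance_m <= distance_to_obstacle_m:
        return "brake"    # braking alone avoids the obstacle
    if opposite_lane_clear:
        return "swerve"   # lane is verified empty, so evading is safe
    return "brake"        # swerving is unsafe; brake to shed speed anyway

# At 20 m/s (72 km/h) the stopping distance is 400/16 = 25 m, so:
print(choose_maneuver(30.0, 20.0, opposite_lane_clear=False))  # brake
print(choose_maneuver(20.0, 20.0, opposite_lane_clear=True))   # swerve
```

The point is that once the inputs are known, the decision is arithmetic - exactly the computation a human cannot reliably perform in under a second, and a machine can.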

Of course there are ethical considerations for other situations.



