
Not only wildly disconnected, but purposefully created to show the ambiguity of rules when they are interpreted by beings without empathy. All of Asimov's books that include the laws also include them being unintentionally broken through some edge case.


It was weird to actually read I, Robot and discover that the entire book is a collection of short stories about those laws going wrong. As far as I know, Asimov never actually told a story where those laws were a good thing.


They aren't generally portrayed as bad, either, just as things that are not as simple as they first appear. Even in the story where the AIs basically run the economy and some humans figure out that they are surreptitiously suppressing opposition to this arrangement (via the hypothesized emergent zeroth law of not allowing humanity to come to harm), Asimov doesn't really seem to believe that this is entirely a bad thing.


The Foundation series is arguably that, but you only find out in book 14 or so.


The 0th law worked out pretty well for Daneel and humanity.


C'mon now, you know not everybody made it all the way to Foundation and Earth. :D

For some reason, that one wasn't even included in the list of books in the series on the inside jacket of the other books that I had.

I remember I had to really hunt for it, and it was from a different publisher. Never knew why.


Well, "everything worked out according to plan and nobody got hurt" doesn't make for a very interesting story ;)


Just sit right back and you'll hear a tale,
A tale of a fateful trip
That started from this tropic port,
Aboard this tiny ship.
They got lost but they called for help,
And now they're totally fine.
And now they're totally fine.


And obviously all these stories have already been fed into the machine.... :-)


> show ambiguity of rules when interpreted by beings without empathy

I don’t think that’s the main problem; there are a lot of moral dilemmas where even humans can’t agree on what’s right.


Humans not agreeing has more to do with the fact that humans are called upon to make decisions with imperfect information under time constraints.

If each human could pause the state of the world, gather all the information, and then decide, they would act humanely.


Not at all. Even if you pose a completely hypothetical situation with well-defined circumstances, the responses will be all over the place.

Just think of abortion, wars, or legalizing drugs. People disagree completely over those because nobody agrees on which choice would be the moral one.


You are suggesting that today's state of affairs would remain the same after people do the pause-the-world literature survey. That is exactly the opposite of my point of view. At the very least, they would be well informed as to why their views differ.


The existence of trolley problems is a counterpoint to your comment.


There is so much literature on the trolley problem that they would come out enlightened on so much other stuff.


Well, it's quite difficult to come up with rules much better than Asimov's.

HPMOR offers a solution called 'coherent extrapolated volition' – ordering the superintelligent machine not to obey the stated rules to the letter, but to act in the spirit of the rules instead. Figure out what the authors of the rules would have wished for, even though they failed to put it in writing.

We are debating sci-fi, of course.


> Figure out what the authors of the rules would have wished for

What if the original author was from long ago and doesn't share modern sensibilities? Of course you can compensate for that to some extent when formulating the rules, but I imagine there will always be potential issues.


Exactly! That was kind of the point, IMO: human morality is deeply complex, and ‘the right thing’ can’t be expressed with some trite high-level directives.


All of fiction is a distortion of sorts. Consider the obese humans in the movie WALL-E. The AI advancements shown in the movie should transitively imply that biotech and biomedical progress would be so advanced that we would have solved perfect health by then.


Not really. As history shows, progress in one field of science/engineering/philosophy doesn't necessarily imply progress in others.


Which is because we don't have intelligence on tap. If we did, we would actually put intelligence to use in all of these subjects. I would rather put intelligence to use on biology than on philosophy.


More just that the rules are actually a summary of a very complex set of behaviours, and that those behaviours can interact with each other, and with unusual situations, in unexpected ways.



