Not only wildly disconnected, but purposefully created to show ambiguity of rules when interpreted by beings without empathy. All of Asimov's books that include the laws also include them being unintentionally broken through some edge-case.
It was weird to actually read I, Robot and discover that the entire book is a collection of short stories about those laws going wrong. As far as I know, Asimov never actually told a story where those laws were a good thing.
They aren't generally portrayed as bad, either, just as things which are not as simple as they first appear. Even in the story where the AIs basically run the economy and some humans figure out that they are surreptitiously suppressing opposition to this arrangement (via a hypothesized emergent zeroth law of not allowing humanity to come to harm), Asimov doesn't really seem to believe that this is entirely a bad thing.
Just sit right back and you'll hear a tale, A tale of a fateful trip
That started from this tropic port, aboard this tiny ship. They got lost but they called for help, and now they're totally fine. And now they're totally fine.
You are suggesting that today's state of affairs would remain the same after people pause and do the world literature survey. That is exactly the opposite of my point of view. At the very least, they would be well informed as to why their views differ.
Well, it's quite difficult to come up with rules much better than Asimov's.
HPMOR offers a solution called 'coherent extrapolated volition': ordering the superintelligent machine not to obey the stated rules to the letter, but to act in the spirit of the rules instead. Figure out what the authors of the rules would have wished for, even though they failed to put it in writing.
> Figure out what the authors of the rules would have wished for
What if the original author was from long ago and doesn't share modern sensibilities? Of course you can compensate to some extent when formulating the rules, but I imagine there will always be potential issues.
Exactly! That was kind of the point IMO, that human morality was deeply complex and ‘the right thing’ couldn’t be expressed with some trite high level directives.
All of fiction is a distortion of sorts. Consider the obese humans in the movie WALL-E. The AI advancements shown in the movie should transitively imply that biotech and biomedical progress would be so advanced that we would have solved perfect health by then.
Which is because we don't have intelligence on tap. If we did, we would actually put intelligence to use across all subjects. I would rather put intelligence to use on biology than on philosophy.
More that the rules are actually a summary of a very complex set of behaviours, and that those behaviours can interact with each other, and with unusual situations, in unexpected ways.