I don't read these rationalist essays either, but you don't need to be a deep thinker to understand why any rational person would be afraid of AI and the singularity.
The AI will do what it's programmed to do, but its programmer's morality may not match my own. What's more scary is that it may be developed with the morality of a corporation rather than a person. (That is to say, no morals at all.)
I think it's perfectly justifiable to be scared of a very powerful being with no morals stomping around!
Those corporations are already superhuman entities with morals that don’t match ours. They do cause a lot of problems. Maybe it’s better to figure out how to fix that real, current problem rather than hypothetical future ones.
This parallel has been drawn. Charlie Stross [0] in particular thinks the main difference is that pre-digital AIs behave much more slowly, so other entities (countries, lawmakers, …) have time to react to them.