Eliezer really needs to learn how to stick to his areas of expertise. When he talks about decision theory or the mathematical formalisms of AI it can be spectacular; when he wades into economics or quantum mechanics it's frequently embarrassing.
As someone with a degree in economics, I have to say I prefer this facetious but reasonably well-informed takedown of bad pop economics (and his HP fanfic!) to his utterly serious entertainment of ridiculous ideas like Roko's Basilisk in the field he's dedicating his life to...
(Rothbard's law: people tend to specialize in what they are worst at [also applied to Rothbard])
Eliezer never took Roko's Basilisk seriously; unfortunately, there's a great deal of misinformation about this online. Here's what happened:
"What I considered to be obvious common sense was that you did not spread potential information hazards because it would be a crappy thing to do to someone. The problem wasn't Roko's post itself, about CEV, being correct. That thought never occurred to me for a fraction of a second. The problem was that Roko's post seemed near in idea-space to a large class of potential hazards, all of which, regardless of their plausibility, had the property that they presented no potential benefit to anyone. They were pure infohazards. The only thing they could possibly do was be detrimental to brains that represented them, if one of the possible variants of the idea turned out to be repairable of the obvious objections and defeaters. So I deleted it, because on my worldview there was no reason not to. I did not want LessWrong.com to be a place where people were exposed to potential infohazards because somebody like me thought they were being clever about reasoning that they probably weren't infohazards. On my view, the key fact about Roko's Basilisk wasn't that it was plausible, or implausible, the key fact was just that shoving it in people's faces seemed like a fundamentally crap thing to do because there was no upside.
Again, I deleted that post not because I had decided that this thing probably presented a real hazard, but because I was afraid some unknown variant of it might, and because it seemed to me like the obvious General Procedure For Handling Things That Might Be Infohazards said you shouldn't post them to the Internet. If you look at the original SF story where the term "basilisk" was coined, it's about a mind-erasing image and the.... trolls, I guess, though the story predates modern trolling, who go around spraypainting the Basilisk on walls, using computer guidance so they don't know themselves what the Basilisk looks like, in hopes the Basilisk will erase some innocent mind, for the lulz. These people are the villains of the story. The good guys, of course, try to erase the Basilisk from the walls. Painting Basilisks on walls is a crap thing to do. Since there was no upside to being exposed to Roko's Basilisk, its probability of being true was irrelevant. And Roko himself had thought this was a thing that might actually work. So I yelled at Roko for violating basic sanity about infohazards for stupid reasons, and then deleted the post. He, by his own lights, had violated the obvious code for the ethical handling of infohazards, conditional on such things existing, and I was indignant about this." https://www.reddit.com/r/Futurology/comments/2cm2eg/rokos_ba...
I fail to see the misinformation. If he believes in "infohazards" and takes his "timeless decision theory" seriously, then he took this seriously. In fact he took it seriously enough to delete the thread. Nothing he said contradicts these facts.
I'll say it. The guy is a quack. A quack in the robes of mathematics rather than medicine, but a quack all the same. No one in philosophy or in mathematics takes these ideas seriously. He never puts these ideas up for peer review, but rather sticks them on his blog where a bunch of people who think of themselves as "smart" and "deep thinkers" read them and nod in agreement. Yet in the end all of these are very, very fringe ideas with no logical or empirical evidence to support them.
In the end, Roko's Basilisk, and these ideas of some potentially benevolent or malevolent AI that controls the simulation in which we live, are indistinguishable from God, and his arguments about them are indistinguishable from the same flawed, centuries-old Pascal's Wager, complete with the exact same logical flaws pointed out centuries ago.
A possible benefit to hearing about Roko's Basilisk:
Consider the following beginner-level logical paradox. There are two statements,
1. Santa Claus exists
2. Both of these sentences are false.
They cannot both be true, since 2 says they are both false. They cannot both be false, or 2 would become true. Therefore exactly one of them must be true, and one false. 2 cannot be true, because it says it is false itself. Therefore 1 must be true, and 2 false. Therefore Santa Claus exists.
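The trick can even be checked mechanically. Here's a quick sketch (my own illustration, not part of the original comment) that enumerates every classical truth assignment for the two sentences. The only consistent assignment forces sentence 1 to be true, and note that the *content* of sentence 1 never enters the check, which is exactly why the "proof" is suspect: it would work equally well for any claim you like.

```python
from itertools import product

def consistent(s1, s2):
    # Sentence 2 asserts "both of these sentences are false".
    # An assignment is consistent iff s2's truth value matches
    # the truth value of what it asserts.
    return s2 == ((not s1) and (not s2))

# Enumerate all four classical truth assignments.
solutions = [(s1, s2) for s1, s2 in product([True, False], repeat=2)
             if consistent(s1, s2)]
print(solutions)  # [(True, False)] -- sentence 1 is forced to be true
```

Since s1 is never constrained by its own content, substituting "Santa Claus exists" with anything else yields the same result, which is the tell that the self-referential setup, not the conclusion, is doing all the work.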
If you hear the above, you might say "Well, self-referential binary logical statements are fraught, and easy to abuse for purposes of confusing people, in ways that are not obvious to the neophyte." You are now more resistant to hucksters who might use the above style of logic to convince you of something genuinely harmful.
Likewise, being aware of Roko's basilisk, and dismissing it for good reasons, might increase your "philosophical bullshit self-defense rating", leaving you a stronger person.
"The method of Bulverism is to "assume that your opponent is wrong, and explain his error". The Bulverist assumes a speaker's argument is invalid or false and then explains why the speaker came to make that mistake, attacking the speaker or the speaker's motive."