Too late to edit, so I'll post just a few examples here:
>The only way out of this mess is to design a moral fixed point, so that even through thousands and thousands of cycles of self-improvement the AI's value system remains stable, and its values are things like 'help people', 'don't kill anybody', 'listen to what people want'.
Bostrom absolutely did not say that the only way to avert a cataclysmic post-SAI future for humans was to design a "moral fixed point". In fact, many chapters of the book are devoted to exploring possible methods of instilling desirable values in an AI, and the pitfalls of each.
Regarding the Eliezer Yudkowsky quote, Bostrom spends several pages, IIRC, on that quote: how difficult it would be to translate into anything a machine could execute, and even what the quote means in the first place. The author dismissively throws the quote in without acknowledging the tremendous nuance Bostrom applies to this line of thought. Indeed, the author does this throughout the article, regularly portraying Bostrom as a man who claims absolute knowledge of the future of AI. That couldn't be further from the truth: Bostrom opens the book by explicitly acknowledging that much of it may well turn out to be incorrect, or rest on assumptions that never materialize.
Regarding "The Argument From My Roommate", the author seems to lack complete and utter awareness of the differences between a machine intelligence and human intelligence. That a superintelligent AI must have the complex motivations of the author's roommate is preposterous. A human is driven by a complex variety of push and pull factors, many stemming from the evolutionary biology of humans and our predecessors. A machine intelligence need not share any of that complexity.
Moreover, Bostrom specifically notes that while most humans may feel there is a huge gulf between the intellectual capabilities of an idiot and a genius, in more absolute terms these are minor differences. That his roommate was apparently a smart individual would not put him anywhere near the capabilities of a superintelligent AI.
To me, this is the smoking gun. I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face, and thus, I highly doubt that the author actually read the book which he attacks so gratuitously.
Well, the thing is, there is no such thing as 'machine intelligence' yet, so this is all an assumption stacked on an assumption about something we don't have a very good grasp of.
You're essentially saying the author is wrong to claim the philosopher's stone can't transmute 100 bars of iron into 100 bars of gold, because a philosopher's stone could absolutely do that sort of thing; that's just what philosopher's stones do.
To argue the merits of this position for a moment: why must a machine intelligence 'not share any of that complexity' of a human intelligence? What suggests that intelligence can arise in the absence of complexity? Isn't the only example of machine intelligence we currently have the product of feeding massive amounts of complex information into a program that gradually adjusts itself to its newly discovered outside world? Or are you suggesting that you could feed a single type of information to something that would then qualify as intelligent?
I did not say that a machine intelligence mustn't share motivational complexity a la humans. I said that an SAI need not share such complexity. Those are two very different statements.
And to understand how and why a machine intelligence could arise without being substantially similar to a human intelligence or sharing similar motivations, well, I suggest you read the book or similar articles. In short, just because humans are the most sophisticated intelligences we yet know of, it would be a careless and unsubstantiated leap to believe that a machine intelligence is likely to share the traits of human intelligence. If this is unclear to you, I recommend learning about how computer programs currently work, and how they might improve to the point of becoming superintelligent.
By the way, there are several forms of SAI, for example one whose superintelligence lies in speed, in quality of thought, or in collective scale.
Well, we know there are different kinds of intelligence in the animal kingdom.
The octopus brain, for instance, developed completely independently of ours. The octopus does not learn socially, is able to function independently from birth, and learns everything it knows within a few years, since its lifespan is short.
So there must be some major differences owing to these very different origins and learning styles.
> I find it completely unbelievable that anyone who read Superintelligence could possibly assert "The Argument From My Roommate" with a straight face
Pretty sure that was a joke, and zeroing in on it is a pretty bad violation of the principle of charity. A lot of the other items in the talk (e.g. "like the alchemists, we don't even understand this well enough to have realistic goals", "counting up all future human lives to justify your budget is a bullshit tactic", and "there's no reason to think an AI that qualifies as superintelligent by some metric will have those sorts of motives anyway") seem to me to be fair and rather important critiques of Bostrom's book (although I was admittedly already a skeptic on this).
To be honest, this article is hardly written with a completely straight face. It has a cheeky tone throughout. Which isn't to say it doesn't offer interesting points for the layman.
Of course, it does have a cheeky tone, though I think all of my points stand. The "interesting points for a layman" are actually a series of straw-man propaganda arguments. It does not argue in good faith and it should not be afforded the legitimacy of a thoughtful opposing position.
If something is a straw man, you're not going to discover that by looking at it prima facie.
Like, I could counter this article by saying "this dude thinks evolution has made humans as intelligent as it's possible for anything to be, but there's no reason to think that's so". And prima facie, my argument isn't outlandish. Nevertheless it's a total straw man.