If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in UFOs being from aliens, and you didn't know a lot about UFOs and affiliated cults, what would be the smart thing to believe?
Also,
>AI alarmists believe in something called the Orthogonality Thesis. This says that even very complex beings can have simple motivations, like the paper-clip maximizer.
Uh, no. The point of the paper clip maximizer is that its goals are orthogonal to its intelligence, not that they're simple.
>It's very likely that the scary "paper clip maximizer" would spend all of its time writing poems about paper clips, or getting into flame wars on reddit/r/paperclip, rather than trying to destroy the universe.
You know what can be made into poems about paper clips? Humans. You know what can have better flame wars than humans? Our atoms, rearranged into the ideal paper clip flame war warrior.
>The assumption that any intelligent agent will want to recursively self-improve
That's not really a premise. A better version would be "a likely path to super-intelligence will be a self-improving agent".
>It's like if those Alamogordo scientists had decided to completely focus on whether they were going to blow up the atmosphere, and forgot that they were also making nuclear weapons, and had to figure out how to cope with that.
Yudkowsky has argued that more should be invested in research into AI risk. There are tens of billions of dollars being spent on AI R&D, and somewhere in the tens of millions range spent on AI risk research. Even if advocates wanted us to spend hundreds of millions of dollars a year on risk research, that wouldn't make this criticism fair. You have a point that we shouldn't be ignoring other more important things for this, but to argue against increasing spending from 8 figures to 9 figures you need better arguments.
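For scale, here's a back-of-the-envelope comparison using round numbers I'm assuming purely for illustration (nobody in this thread has cited exact figures):

```python
# Illustrative, assumed round numbers only: "tens of billions" on AI R&D,
# "tens of millions" on AI risk research, "hundreds of millions" proposed.
ai_rnd_per_year = 20e9     # assumed ~$20B/year on AI R&D
risk_per_year = 20e6       # assumed ~$20M/year on AI risk research (8 figures)
proposed_per_year = 200e6  # assumed upper end of what advocates ask for (9 figures)

print(f"risk research today:   {risk_per_year / ai_rnd_per_year:.2%} of AI R&D")
print(f"at the proposed level: {proposed_per_year / ai_rnd_per_year:.2%} of AI R&D")
```

Even at the 9-figure level, risk research would still be on the order of one percent of what goes into building AI, which is why the "completely focus" analogy doesn't fit.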
> If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in UFOs being from aliens, and you didn't know a lot about UFOs and affiliated cults, what would be the smart thing to believe?
That Elon Musk, Bill Gates and Stephen Hawking etc are all a little nutty when it comes to UFOs and aliens?
I don't know a lot about UFOs and affiliated cults, but I'm going to guess that those mentioned are not UFO experts just like they aren't machine learning experts.
> That's not really a premise. A better version would be "a likely path to super-intelligence will be a self-improving agent".
Excellent point, but that doesn't give a self-improving agent the ability to ignore computational complexity or the uncertainty of chaotic systems.
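To illustrate the chaotic-systems half of that point (my own sketch, not anything from the article): in a chaotic system like the logistic map, two trajectories that start a billionth apart become uncorrelated within a few dozen steps, so predicting further ahead takes exponentially better measurements, not just a smarter predictor.

```python
# Minimal sketch: the logistic map at r = 4.0 is a textbook chaotic system.
# A 1e-9 difference in initial conditions keeps growing until the two
# trajectories are unrelated, no matter how cleverly you model the dynamics.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x_a, x_b = 0.400000000, 0.400000001  # initial states differing by 1e-9
for step in range(1, 51):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |x_a - x_b| = {abs(x_a - x_b):.3e}")
```

By roughly step 30 the gap is as large as the state itself, which is the same wall a self-improving agent hits when it tries to out-predict a chaotic world.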
> That Elon Musk, Bill Gates and Stephen Hawking etc are all a little nutty when it comes to UFOs and aliens?
Yes! Elon Musk firmly believes that we're living in a simulation, and that doesn't make me believe in that theory more; it simply makes me admire Musk less.
Just because someone is or has been extremely successful doesn't mean they're right about everything. Many successful and intelligent people have been very religious: that's a testament to the human mind's complexity and frailty, not to the existence of God...
Madeleine Albright is a strong advocate of Herbalife: that doesn't change my opinion of Herbalife but it does change my opinion of Albright.
That's one person. If as many people as have spoken out about AI risk also "firmly believed we're in a simulation", it would shift my views (I would expect there to be some evidence or very strong arguments for it).
> That's not really a premise. A better version would be "a likely path to super-intelligence will be a self-improving agent".
If I understand your claim, it's not very relevant. You seem to be saying that, given that superintelligence happens, the probability that it will have happened via a self-improving agent is high.
That doesn't refute the claim that "superintelligence probably will not happen".
> You have a point that we shouldn't be ignoring other more important things for this, but to argue against increasing spending from 8 figures to 9 figures you need better arguments.
Why don't we spend that money on non-vague problems that actually face us now? For example, unbiased AI and explainable AI are topics of research that would help us right now and don't require wacky far-future predictions.
> You seem to be saying that, given that superintelligence happens, the probability that it will have happened via a self-improving agent is high.
I'm saying that there's no need to assume that "any intelligent agent will want to recursively self-improve". It's a strawman he deliberately uses so that rejecting it lets him get a dig in at his opponents. Of course not every agent will want to self-improve, and nobody relevant ever said that every agent will.
>Why don't we spend that money on non-vague problems that actually face us now? For example, unbiased AI and explainable AI would help us right now and don't require wacky far-future predictions.
I said that in response to his absurd claim that AI risk spending is taking a lot away from other things we could worry about.
But anyway, if you look at what MIRI actually produces (see https://intelligence.org/all-publications/), you'll find many papers that don't depend on "wacky" predictions.
Sure, there are also papers about theorem proving and stuff. I can't fault the GOFAI (Good Old-Fashioned AI) part of what MIRI does. I do believe we need more of that kind of research and it will let us understand current AI better.
But superintelligence and AI risk are made out (by many) to be the most important thing in the world. Basic research isn't that. It's just good. If we fund math and theoretical research, our descendants will be better cognitively equipped to deal with whatever they need to deal with, and that's pretty neat. I see the basic research MIRI does as a special case of that.
Wait a minute. Is the superintelligence hype just a front for convincing people to financially support mathematics and philosophy? Maybe the ends justify the means, then. But too much false urgency will create a backlash where nobody respects the work that was done, like Y2K.
> If Elon Musk, Bill Gates, and Stephen Hawking etc all expressed belief in UFOs being from aliens, and you didn't know a lot about UFOs and affiliated cults, what would be the smart thing to believe?
What's more likely? That all those smart people, who in other domains have proven their level-headed thinking and good moral compass (past history of Microsoft notwithstanding), have suddenly suffered brain damage, or that this argument is just another piece in the author's ongoing series of bashing the techies?