> You seem to be saying that, given that superintelligence happens, the probability that it will have happened via a self-improving agent is high.
I'm saying that there's no need to assume that "any intelligent agent will want to recursively self-improve". It's a strawman he deliberately uses so that rejecting it lets him get a dig in at his opponents. Of course not every agent will want to self-improve, and nobody relevant ever claimed that every agent will.
>Why don't we spend that money on non-vague problems that actually face us now? For example, unbiased AI and explainable AI would help us right now and don't require wacky far-future predictions.
I said that in response to his absurd claim that AI risk spending is taking a lot away from other things we could worry about.
But anyway, if you look at what MIRI actually produces (see https://intelligence.org/all-publications/), you'll find many papers that don't depend on "wacky" predictions.
Sure, there are also papers about theorem proving and the like. I can't fault the GOFAI (Good Old-Fashioned AI) part of what MIRI does. I do believe we need more of that kind of research, and that it will help us understand current AI better.
But superintelligence and AI risk are made out (by many) to be the most important thing in the world. Basic research isn't that. It's just good. If we fund math and theoretical research, our descendants will be better cognitively equipped to deal with whatever they need to deal with, and that's pretty neat. I see the basic research MIRI does as a special case of that.
Wait a minute. Is the superintelligence hype just a front for convincing people to financially support mathematics and philosophy? Maybe the ends justify the means, then. But too much false urgency will create a backlash where nobody respects the work that was done, like Y2K.