I think the issue is that once we do manage to build an AI that matches human capabilities in every domain, it will be trivial to exceed human capabilities. Logic gates can switch millions of times faster than neurons can pulse. The speed of digital signals also means that artificial brains won't be size-limited by signal latency in the same way that human brains are. We will be able to scale them up, optimize the hardware, make them faster, and give them more memory and perfect recall.
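For a rough sense of the speed gap, here's a back-of-envelope comparison; the firing and switching rates are ballpark assumptions, not measurements:

```python
# Back-of-envelope only: both rates are rough order-of-magnitude assumptions.
neuron_rate_hz = 200      # cortical neurons fire on the order of 10^2 Hz
gate_rate_hz = 3e9        # logic gates / clocks switch on the order of 10^9 Hz

speed_ratio = gate_rate_hz / neuron_rate_hz
print(f"gates switch roughly {speed_ratio:,.0f}x faster than neurons fire")
# -> roughly 15,000,000x, i.e. millions of times faster
```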
Nick Bostrom keeps going on in his book about the singularity, and about how once AI can improve itself it will quickly be way beyond us. I think the truth is that the AI doesn't need to be self-improving at all to vastly exceed human capabilities. If we can build an AI as smart as we are, then we can probably build one a thousand times as smart, too.
> it will be trivial to exceed human capabilities. Logic gates can switch millions of times faster than neurons
You're equating speed with quality. There's no reason to assume the two go together. Do you think an AI will be better at catching a fieldmouse than a falcon? Do you think the falcon is limited by speed of thought? Many forms of intelligence are limited by game theory, not raw speed. The challenge isn't extracting large quantities of information, it's knowing which information is relevant to your ends. And that knowledge is just as limited by the number of opportunities for interaction as by the availability of analytic resources.
Think of it this way: most animals could trivially add more neurons. There are plenty of outliers that got a shot, but bigger-brained individuals obviously hit diminishing returns; otherwise the population would have shifted already.
The previous comment is not confusing speed with quality.
The point is that once we have a machine as smart as us, simply improving its speed and resources will increase its effective intelligence.
Whether a higher/faster intelligence generates additional value in any given task is beside the point. Some tasks don't benefit from increased intelligence, but that doesn't mean being smarter doesn't come with great benefits.
There's also another thing. AI may not need to be superhuman; it may be close-but-not-quite human and still be more effective than us, simply because we carry a huge amount of baggage that a mind we build won't have.
Trust me, if I were wired directly to the Internet and had some well-defined goals, I'd be much more effective at pursuing them than any of us here - possibly all of us here combined. Because as a human, I have to deal with stupid shit like social considerations, random anxiety attacks, the drive to mate, the drive of curiosity, etc. Focus is a powerful force.
What about consciousness or intelligence implies that it would be 'pure' in the sense that you describe? Wouldn't a fully conscious being have a great deal of complexity that might render it equivalent to the roommate example? Couldn't it get offended after crawling the internet and reading that a lot of people didn't like it very much?
The idea that 'intelligence' is somehow an isolatable and trainable property ignores all examples of intelligence that currently exist. Intelligence is complex, multifaceted, and arises primarily as an interdependent phenomenon.
It doesn't ignore those examples. The idea pretty much comes from the definition of intelligence used in AI, which (while still messy at times) is more precise than the common usage of the word.
In particular, intelligence is a powerful optimization process - it's an agent's ability to figure out how to make the world it lives in look more like it wants. Values, on the other hand, describe what the agent wants. Hence the orthogonality thesis, which is pretty obvious from this definition. 'idlewords touches on it, but only to try and bash it in a pretty dumb way - the argument is essentially like saying "2D space doesn't exist, because the only piece of paper I ever saw had two dots on it, and those dots were on top of each other".
You could argue that for evolution the orthogonality thesis doesn't hold - that maybe our intelligence is intertwined with our values. But that's because evolution is a dynamic system (a very stupid dynamic system). Thus it doesn't get to explore the whole phase space[0] at will, but follows a trajectory through it. It may be so that all trajectories starting from the initial conditions on our planet end up tightly grouped around human-like intelligence and values. But not being able to randomize your way "out there" doesn't make the phase space itself disappear, nor does it imply that it is inaccessible for us now.
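A toy sketch of the separation being claimed (purely illustrative - not evidence that real minds factor this cleanly): the search machinery and the value function are independent parameters, and swapping one leaves the other untouched.

```python
import random

def optimize(value_fn, steps=10_000):
    """Toy hill climber: the 'intelligence' part, shared by every agent below."""
    state = 0.0
    for _ in range(steps):
        candidate = state + random.uniform(-1, 1)
        if value_fn(candidate) > value_fn(state):  # only the values differ
            state = candidate
    return state

# Same optimizer, opposite values: the search machinery doesn't care what it's aimed at.
print(optimize(lambda x: -(x - 42) ** 2))   # climbs toward 42
print(optimize(lambda x: -(x + 42) ** 2))   # climbs toward -42
```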
The same could be said of flight, but now we have machines whose sole purpose is flight.
"Pure" is a bit of an extreme word, but clearly designed things are purer to particular goals than biological systems typically are.
This is partly due to the essence of what design means and partly because machines don't have to do all the things biology does, such as maintain a metabolism that continually turns over their structural parts, reproduce, find and process their own fuel, etc.
Not to mention lack of motivation and tiredness. That being said, I'm sure we can make an AI that can think much faster than a human. I also think we can build an AI that can keep far more than the seven or so items humans can hold in working memory.
Just the fact that a machine mind will be able to interface with regular digital algorithms directly will give it a huge advantage over us.
Imagine if we had calculus modules built into our brains. Now add modules for every branch of math and physics, languages, etc. The "dumb" AI and algorithms of today will be the mental accelerators of tomorrow's superintelligences.
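As a rough sketch of what a built-in "calculus module" looks like when a mind can call digital algorithms directly (sympy is just a stand-in example here):

```python
import sympy as sp

x = sp.symbols('x')
expr = sp.sin(x) * sp.exp(-x**2)

# Exact symbolic answers, no scratch paper: the kind of module a machine mind
# could invoke as directly as we recall a word.
print(sp.diff(expr, x))                                  # -2*x*exp(-x**2)*sin(x) + exp(-x**2)*cos(x)
print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))   # sqrt(pi)
```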
But scaling them up to be linearly faster doesn't help if the difficulty of the problems they face doesn't scale linearly. When it comes to inventing cleverer things, I strongly doubt the difficulty is even a remotely small polynomial.
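To put toy numbers on that (the exponential cost model is just an assumed example, not a claim about real research):

```python
import math

# Suppose each extra "level" of cleverness costs twice the compute: cost(n) = 2**n.
speedup = 1000
budget = 2 ** 40                         # arbitrary baseline compute budget

n_before = math.log2(budget)             # highest level affordable before the speedup
n_after = math.log2(budget * speedup)    # highest level affordable after

print(n_before, n_after, n_after - n_before)
# 40.0  ~49.97  ~9.97 -> a 1000x speedup buys only ~10 extra levels
```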
You're equating human time with AI/computer time. A one-day-old neural net has already experienced multiple lifetimes' worth of images before it is able to beat you at image recognition. It's not trivial, but we just gloss over the extremely complex training phase because it runs on a different clock speed than us.