So I'm a LessWronger and know a bit about the "movement", and I think you're misunderstanding what "LessWrongers think". Obviously not all LessWrongers think the same thing, but I'm describing the average position of people who believe AI safety is worth working on.
I'd love to explain the basic position; tell me where you disagree with it. Here it is:
1. Intelligence can be created, because there is nothing "special"/"magical" about humans, and our intelligence was eventually created.
2. At some point, humanity will create an "artificial general intelligence". (We'll just keep improving science and technology, and there's no fundamental reason why this won't eventually allow us to create an intelligence.)
3. "Artificial general intelligence" basically means a machine that is capable of achieving its goals, where the goals and methods it uses to achieve them are general. I.e. not "is able to play chess really well", but rather "is able to e.g. cure cancer".
4. Once we have an artificial general intelligence, it will likely become much smarter than us. (There are various reasons for and debates about this, but for now let's just note that since it's a computer, we can run it much faster than a human. If you dispute this point, we can talk about it more.)
5. Something being much more intelligent than us means that, in effect, it has almost absolute power over what happens in the world (like we are basically all-powerful from the vantage point of monkeys, and their fate is totally in our hands).
6. (This is, I believe, the main point): Something being "intelligent" in the sense we're talking about doesn't say anything about what its goals are, or about how its mind works. We're used to every intelligent thing being a human being, and human minds all work in basically the same way; an artificial intelligence's mind will work completely differently from ours. So if we "tell it" something like "cure cancer", it won't have our intuition and background knowledge to understand that we mean "but don't turn half the world into a giant computer in order to cure it". (There's a toy sketch at the end of this comment illustrating this gap.)
7. Combine the two points above, and you get the big idea - whatever the goals of the AI will be, it will achieve them. Its goals won't, by default, be ones that are good for humanity, if only because we have no idea how to program our "value system" into a computer.
8. Therefore, we need to start working on making sure that when AI does come, it's safe. Even once we can create an AI, the "extra" problem of making it safe is hard, and we have absolutely no idea how to solve it right now. We have no idea how long AI will take, or how long figuring out safety will take, but since this is a humanity-threatening problem, we should devote at least some resources to working on it right now.
That's it, that's the basic idea. I'd love to hear which part you disagree with. I totally understand that not everyone will agree on some of the final details, e.g. how many resources we should devote right now (you might even claim the answer is zero, because anything we do now won't be useful).
But I think the overall reasoning is sound, and would love to hear an intelligent disagreement.
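To make point 6 concrete, here's a toy sketch in Python. Everything in it is made up purely for illustration (the "world state" dictionary and the two objective functions are hypothetical, not any real system); it just shows the gap between the goal we literally write down and the goal we actually mean:

    def naive_objective(world_state):
        # What we literally asked for: maximize the number of cancers cured.
        return world_state["cancers_cured"]

    def intended_objective(world_state):
        # What we actually meant: cure cancer, subject to background constraints
        # we never wrote down because every human takes them for granted.
        if world_state["half_the_world_is_a_computer"] or world_state["humans_harmed"] > 0:
            return float("-inf")
        return world_state["cancers_cured"]

    # An optimizer handed only naive_objective prefers whichever world scores
    # higher on it, even one that violates every unstated constraint.
    bad_world  = {"cancers_cured": 10**9, "half_the_world_is_a_computer": True,  "humans_harmed": 10**9}
    good_world = {"cancers_cured": 10**6, "half_the_world_is_a_computer": False, "humans_harmed": 0}

    assert naive_objective(bad_world) > naive_objective(good_world)
    assert intended_objective(good_world) > intended_objective(bad_world)

The point of the sketch: the optimizer isn't "misunderstanding" anything. It does exactly what the written-down objective says, and that is the whole problem.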
> 1. Intelligence can be created, because there is nothing "special"/"magical" about humans, and our intelligence was eventually created.
Human intelligence evolved through a (very long!) series of natural processes, to the best of my knowledge. To say it was "created" implies something closer to a religious or philosophical opinion, rather than something supported by science.
> 2. At some point, humanity will create an "artificial general intelligence". (We'll just keep improving science and technology, and there's no fundamental reason why this won't eventually allow us to create an intelligence.)
This is hugely debatable. Why is AGI inevitable? Even given great amounts of computing resources, an artificial general intelligence does not just automatically appear; it must somehow be designed and programmed. Fields like computer vision have grown tremendously using techniques like deep learning, but there really isn't any evidence that I know of that a general intelligence is any closer than it was 20 years ago.
Totally agree with your first point; I just didn't want to load the position with too many caveats and nitpicking words. If it's not clear, my argument in no way implies that human intelligence was "created" by an intelligence - it evolved. Poor wording aside, my statement remains the same.
"This is hugely debatable. Why is AGI inevitable? Even given great amounts of computing resources, a artificial general intelligence does not just automatically appear [...]"
Well, no one thinks AGI will appear without anyone working on it, but lots and lots of people are working on it now. And since there are huge incentives to create one, the belief is that more people will work on it as time goes on.
"[...] there really isn't any evidence that I know of that a general intelligence is any closer than it was 20 years ago."
Well, in some sense I agree, in that we still have no idea how far off AGI is. If it's going to happen in 10 years, we should definitely prepare now. If it's 500 years away, maybe it's too early to think about it. But since neither of us knows, wouldn't you say it's worth putting some effort into working on safety?
In another sense, though, I disagree that we're not any closer to AGI. As you said just a sentence earlier, fields like computer vision have advanced tremendously. While this doesn't necessarily mean AGI is closer, the fields certainly seem related, so advancement in one is a sign that advancement in the other is closer.
Yeah, you go off the rails around step 5. "Something being much more intelligent than us means that, in effect, it has almost absolute power over what happens in the world" makes no sense. Since when does intelligence get you power? Are the smartest people you know also in positions of power? Are the most powerful people all highly intelligent?
"whatever the goals of the AI will be, it will achieve them". Dude, if intelligence meant you could achieve your goals, Hacker News would be a much less whiny place.
"Since when does intelligence get you power?" You hit the nail on the head there. Its about I/O. (Just as its about I/O in the original article - garbage in, garbage out). Jaron Lanier makes this point in.
"This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don't have to worry about is the AI algorithm running them, because that's speculative. There isn't an AI algorithm that's good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it's a massive misdirection."
As I've said before, the singularity theorists seem to be somewhere between computer scientists, who think in terms of software, and philosophers, who think in terms of mindware, and they seem to have a tendency to completely forget about hardware.
There seems to be a leap from 'superintelligent AI' to 'omnipotent, omniscient deity' that is accepted as inevitable by what, for shorthand, is being called the 'lesswrong' worldview here. That leap ignores the fact that there are limited resources, limited amounts of energy, and limitations imposed by the laws of physics and information that stand between a superintelligent AI and the ability to actuate changes in the world.
You're not engaging with the claim as it was meant. In context, no human being has ever been "much more intelligent" than me. Not in the same way that I am "much more intelligent" than the monkey von Neumann.
You might decide that this means edanm goes off the rails at step four, instead. But you should at least understand where you disagree.
I'm still not sure you could assume ultimate power and achieve everything you desired if you were the only hacker news reader on a planet of 8 billion monkeys.
> I'm still not sure you could assume ultimate power and achieve everything you desired if you were the only hacker news reader on a planet of 8 billion monkeys.
I would think it relatively easy for a human armed with today's knowledge and a reasonable yet limited resource infrastructure (for comparison to the situation of an unguarded AI) to engineer the demise of the primate competitors in the neighborhood. Setting some strategic fires and burning down jungles would be the first step. "Fire" might be a metaphor for some technology that an AI masters before humans quite get the hang of it, and that can be used against them. For example, a significant portion of Americans seem way too easily manipulated by religion and fear; an AI-generated televangelist or Donald Trump figure is a frightening thought.
Well "is able to e.g. cure cancer" is not actually very general. Which leads to the problem with 2) whats the economics behind creating a general intelligence when a specific intelligence will get you better results in a given industry. Even then specific intelligence is still going to be subject to the good-enough economic plateau that has killed so many future predictions.
Then the problems with 4 on up really concern the speed with which 4 can feasibly happen. The "AI goes FOOM" doomsayers seem to think that we'll end up with an AI so horribly inefficient that it will be able to rewrite itself to be super-duper intelligent without leaving its machine (and won't accidentally nerf itself in the attempt), and then that super-duper intelligent computer will trick several industries into building an even more powerful body for itself, etc., all of this happening before humans pull the plug. No step of that has anything beyond speculation to support it.
On a general note, the full employment theorems mean that even if general AI is economically incentivized, there are still going to be dozens/hundreds/thousands of different AIs carving out niches for themselves, which, given that the earth/universe has limited resources, handily prevents the paperclip-maximizer problem. While the future may not need humans, it will still be a diverse future.
1) Define intelligence, knowledge, truth, proof (deductive and inductive), how concepts work, and so on. I am not being facetious here. AI is an epistemology problem, not a technological one.
2) I agree, but we have to solve the problem of induction first, and LW/EY are certain that there is no problem of induction. How can one be certain in a Kantian/Popperian framework, where statements can be proved false but never true?
3a) Here is where we part ways. It is a common assumption that AI implies consciousness, but I think that is an unwarranted assumption. Whatever the principles behind intelligence are, we know that conscious minds have found a way to (implicitly) enact them. It does not follow that consciousness is necessary for intelligence (it is just the biological manifestation of those principles), and I think there are good arguments that the two are not correlatives. If they are correlatives, then it will be easier to genetically design better babies, now that evolution is under conscious control, than to start from scratch.
3b) Goals, values, aims, etc. are teleological concepts that apply only to living things, because only living things face the alternative of life or death. Turning off your computer does not kill it in the sense that a living thing that stops functioning dies forever. Together, 3a) and 3b) defuse all the scary scenarios about AI taking over the world. They do raise the issue of AI in the hands of bad people with evil goals and values, like the dictator of North Korea, who now apparently has the H-bomb. This is the real danger today.
4) I agree. Computer-aided intelligence will allow us to accelerate the accumulation of knowledge (and its application to human life) in unimaginable ways. But it will be no more conscious than your (deductive) calculator.
5) Non Sequiturs. Possibly psychological projection of helplessness or hopelessness.
6) As the joke goes, we can always unplug it.
7) Granting your premises, the goal of LW/EY should not be AI but the scientific, rational proof and definition of ethics - yet their fundamental philosophic premises won't allow it.
8) For me, the threat is bad, evil people in possession of powerful technologies.
>Granting your premises, the goal of LW/EY should not be AI but the scientific, rational proof and definition of ethics - yet their fundamental philosophic premises won't allow it.
That is the goal of MIRI, the organization that EY founded, and it is a frequent topic of discussion on LW. For example:
•highly reliable agent design: how can we design AI systems that reliably pursue the goals they are given?
•value learning: how can we design learning systems to learn goals that are aligned with human values?
Not exactly what I meant. What are these human values (for humans not robots) and how do you prove they are rational and scientific? Their goal is to design AI that will accept human goals/values without defining a rational basis for those human values.