Should AI Be Open? (slatestarcodex.com)
149 points by apsec112 on Dec 17, 2015 | 161 comments



'If you were to come up with a sort of objective zoological IQ based on amount of evolutionary work required to reach a certain level, complexity of brain structures, etc, you might put nematodes at 1, cows at 90, chimps at 99, homo erectus at 99.9, and modern humans at 100. The difference between 99.9 and 100 is the difference between “frequently eaten by lions” and “has to pass anti-poaching laws to prevent all lions from being wiped out”.'

[EDITED: the intended quote is below. the quote above is the next paragraph of OP, which is only slightly less relevant than the intended one]

'Why should we expect this to happen? Multiple reasons. The first is that it happened before. It took evolution twenty million years to go from cows with sharp horns to hominids with sharp spears; it took only a few tens of thousands of years to go from hominids with sharp spears to moderns with nuclear weapons. Almost all of the practically interesting differences in intelligence occur within a tiny window that you could blink and miss.'

Yudkowsky's position paper on this idea explains this in more detail: http://intelligence.org/files/IEM.pdf


So, here's a random thought on this whole subject of "AI risk".

Bostrom, Yudkowsky, etc. posit that an "artificial super-intelligence" will be many times smarter than humans, and will represent a threat somewhat analogous to an atomic weapon. BUT... consider that the phrase "many times smarter than humans" may not even mean anything. Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be. Or close enough that being "smarter than human" does not represent anything analogous to an atomic bomb.

So this might be an interesting topic for research, or at least for the philosophers: "What's the limit of how 'smart' it's possible to be?" It may be that there's no possible way to determine that (you don't know what you don't know and all that), but if there is, it might be enlightening.


> Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be.

I think Nick Bostrom had the perfect reply to that in Superintelligence: Paths, Dangers, Strategies:

> Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization—a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

It would be extremely strange if we were near the smartest possible minds. Just look at the evidence: Our fastest neurons send signals at 0.0000004c. Our working memory is smaller than a chimp's.[1] We need pencil and paper to do basic arithmetic. These are not attributes of the pinnacle of possible intelligences.

Even if you think it's likely that we are near the smartest possible minds, consider the consequences of being wrong: The AI becomes much smarter than us and potentially destroys everyone and everything we care about. Unless you are supremely confident in humanity's intelligence, you should be concerned about AI risk.

1. https://www.youtube.com/watch?v=zsXP8qeFF6A


There are savants who can do amazing feats of mental arithmetic, yet have severe mental disabilities in other areas. Perhaps there are some fundamental limits and trade-offs involved? We don't know yet whether computers will be able to break through those limits.

The humans who are in charge of our current society by virtue of having the most wealth, political power, or popularity aren't necessarily the smartest or quickest thinkers, at least not in the way that most AGI researchers seem to be targeting. So even if someone manages to build a real AGI there's no reason to expect it will end up ruling us.


>There are savants who can do amazing feats of mental arithmetic, yet have severe mental disabilities in other areas.

There are also geniuses who can do amazing feats of mental arithmetic and have no severe mental disabilities in any other areas. There are people who have perfect memories (and usually, they live totally normal lives but for the inability to forget a single event). There are people who are far better than average at recognizing faces, pattern recognition, and other tasks that are non-conscious.

The notion that there are "fundamental limits" that are for some reason near the typical human is the cognitive bias of the just-world fallacy. The world is not just. Bad things happen to good people. Good things happen to bad people. If there is a fundamental limit to how smart a mind can get, it's very, very far above the typical human, because there are exceptional humans that are that far away and there's no reason to think a computer couldn't beat them either. Deep Blue beat Kasparov.

>The humans who are in charge of our current society by virtue of having the most wealth, political power, or popularity aren't necessarily the smartest or quickest thinkers, at least not in the way that most AGI researchers seem to be targeting.

I'm not at all familiar with AGI research, but while there are people who start out ahead, there's no reason to think that you can't actually play the game and win it. Winning at the games of acquiring wealth, political power, or popularity is related to intelligence, insofar as intelligence is defined (as it is, at least in psychology) as the ability to accomplish general goals.

Walking into a meeting of the Deutsche Arbeiterpartei, joining as the 55th member, and later seizing control of the state requires intelligence. Landing in a dingy yacht with 81 men, bleeding a regime to death, and ruling it until your death requires intelligence. Buying the rights to the Quick and Dirty Operating System, licensing it to IBM, and becoming the de facto standard OS for all consumer computing requires intelligence. Presenting yourself as a folksy Texan when you're a private-school elite from Connecticut and convincing the electorate that you'd have a beer with them requires intelligence. All of these outcomes are goals, and accomplishing them demonstrates an ability to actually put the rubber to the road.

You're confusing the technical term of intelligence, which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence" which means something along the lines of "ability to impress people." Intelligence is not always impressive, and humans have a fantastic ability to write off the accomplishments of others when they want to make the world appear more just. I mean, nobody feels good about the fact that we're one Napoleon away from a totally different world system that may or may not suit our interests. The belief that the world is somehow fundamentally different to the extent that the next Napoleon-class intelligence who isn't content managing a hedge fund and just increasing an int in a bank database can't actually redraw the world map, is just an illusion we tell ourselves to make the world we live in more fair, the stories we live out more meaningful, and the events of our lives have a little bit more importance.

AI could change things very, very drastically. It's right to be afraid.


> There are also geniuses who can do amazing feats of mental arithmetic and have no severe mental disabilities in any other areas.

And do they seem to hold any sort of significant power? It seems that intelligence is not a very good means for achieving power. Charm seems much more effective, and therefore dangerous. I'd be afraid of super-human charm much more than super-human intelligence.

Mental disabilities aside, very high IQ seems to be correlated with relatively low charm and low ability to solve particular problems (how to get people to do what you want) that are far more dangerous than the kind of problem-solving intelligence (or how we commonly define it) is capable of solving. "Super" intelligent people are terrible problem-solvers when the problems involve other humans.

Ironically, the fact that people like me view the AI-scare as a religious apocalypse that is as threatening as any other religious apocalypse implies one of two things: 1) that the people promoting the reality of this apocalypse are not as intelligent as they believe themselves to be (a real possibility given their limited understanding of both intelligence and our real achievements in the field of AI) and/or that 2) intelligent people are terrible at convincing others, and so don't pose much of a risk.

Either possibility shows that super-human AI is a non-issue, certainly not at this point in time. As someone said (I don't remember who), we might as well worry about over-population on Mars.

What's worse is that machine learning poses other, much more serious and much more imminent threats than super-human intelligence, such as learned biases, which are just one example of conservative feedback-loops (the more we rely on data and shape our actions accordingly, the more the present dynamics reflected in the data ensure that they don't change).


>It seems that intelligence is not a very good means for achieving power. Charm seems much more effective, and therefore dangerous. I'd be afraid of super-human charm much more than super-human intelligence.

See these paragraphs in the post to which you replied:

>Walking into a meeting of the Deutsche Arbeiterpartei, joining as the 55th member, and later seizing control of the state requires intelligence. Landing in a dingy yacht with 81 men, bleeding a regime to death, and ruling it until your death requires intelligence. Buying the rights to the Quick and Dirty Operating System, licensing it to IBM, and becoming the de facto standard OS for all consumer computing requires intelligence. Presenting yourself as a folksy Texan when you're a private-school elite from Connecticut and convincing the electorate that you'd have a beer with them requires intelligence. All of these outcomes are goals, and accomplishing them demonstrates an ability to actually put the rubber to the road.

>You're confusing the technical term of intelligence, which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence" which means something along the lines of "ability to impress people." Intelligence is not always impressive, and humans have a fantastic ability to write off the accomplishments of others when they want to make the world appear more just. I mean, nobody feels good about the fact that we're one Napoleon away from a totally different world system that may or may not suit our interests. The belief that the world is somehow fundamentally different to the extent that the next Napoleon-class intelligence who isn't content managing a hedge fund and just increasing an int in a bank database can't actually redraw the world map, is just an illusion we tell ourselves to make the world we live in more fair, the stories we live out more meaningful, and the events of our lives have a little bit more importance.

>Ironically, the fact that people like me view the AI-scare as a religious apocalypse that is as threatening as any other religious apocalypse implies one of two things:

Some alternative explanations:

3. You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"

4. You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm

5. Your view of the world is factually incorrect, I mean you believe things like:

>Mental disabilities aside, very high IQ seems to be correlated with relatively low charm and low ability to solve particular problems (how to get people to do what you want) that are far more dangerous than the kind of problem-solving intelligence (or how we commonly define it) is capable of solving. "Super" intelligent people are terrible problem-solvers when the problems involve other humans.

Let's assume that IQ is a good proxy for intelligence (it isn't): what IQ do you think Bill Gates or Napoleon or Warren Buffett or Karl Rove have? What IQ do you think Steve Jobs or Steve Ballmer had/have? Do you think they're just "average" or just not "very high"?

This:

>very high IQ seems to be correlated with relatively low charm

is again the just world fallacy! There is no law of the universe that makes people very good at abstract problem solving bad at social situations. In fact, most hyper-successful people are almost certainly good at both.

And that ignores the fact that cognitive biases DO exist, and it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing. Do you think it takes some super-special never-going-to-be-replicated feat of non-Turing-computable human thought to write Zynga games?

It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.


> which means something along the lines of "ability to accomplish goals," with the social construct of "intelligence"

Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.

> You're wrong, and super-human AI is a massive issue, regardless of how you pattern-match it as "religious"

If you think I don't always presume that everything I say is likely wrong, then you misunderstand me. I don't, however, understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.

> You're not creative enough (one might dare say "intelligent" but that would be too snarky) to imagine all the ways in which an AI could devastate humanity without it having much intelligence or much charm

I can imagine many things. I can even imagine an alien race destroying our civilization tomorrow. What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.

> In fact, most hyper-successful people are almost certainly good at both.

I would gladly debate this issue if I believed you genuinely believed that. If you had a list ordered by social power of the top 100 most powerful people in the world, I doubt you would say their defining quality is intelligence.

> it's very possible to apply the scientific method and empirical problem solving to finding them, and then exploiting humans that way. This is a huge subfield of psychology (persuasion) and the basis of marketing.

Psychology is one of the fields I know most about, and I can tell you that the people most adept at exploiting others are not the ones you would call super-intelligent. You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.

> It's nice to think the world is a safe place, but the reality is that our social order is increasingly precarious and an AI could easily disrupt that.

There are so many things that could disrupt that, and while AI is one of them, it is not among the top ten.


>Perhaps. But if that is the case, the people who are most intelligent by this definition are far from the ones recognized as intelligent by the AI-fearing community. Let me put it this way: Albert Einstein and Richard Feynman would not be among them. Adolf Hitler, on the other hand, would be a genius.

How so? Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).

And yes, see my original comment re: it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.

>I do, however, don't understand what you mean by "a massive issue". Do you mean imminent danger? Yes, I guess it's possible, but being familiar with the state of the art, I can at least discount the "imminent" part.

The state of the art is irrelevant here; in particular, most of AI seems to be moving in the direction of "use computers to emulate human neural hardware and use massive amounts of training data to compensate for the relative sparseness of the artificial neural networks."

What's imminently dangerous about AI is that all it really takes is a few innovations, possibly in seemingly unrelated areas, to enable the several people who see the pattern to go and implement AI. This is how most innovation happens, but here it could be very dangerous, because...

>What I fail to see is compelling arguments why AI is any more dangerous or any more imminent than hundreds of bigger, more imminent threats.

AI could totally destabilize our society in a matter of hours. Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that, or incidentally caused it to happen. An AI might not be able to launch nukes directly (in the US at least, who knows what the Russians have hooked up to computers), but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack. There actually are places that will just make molecules you send them, so if the AI figures out protein folding, it could wipe out humanity with a virus.

AI is more dangerous than most things, because it has:

* limitless capability for action

* near instantaneous ability to act

The second one is really key; there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.

If you have a list of hundreds of bigger, more imminent threats that can take humanity from 2015 to 20,000 BCE in a day, I'd like to see it.

>I doubt you would say their defining quality is intelligence.

I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals" and then say "people who have chosen to become politically powerful and accomplished that goal must not be people you consider intelligent."

>You wouldn't say they are of average intelligence, but I don't think you'd recognize their intelligence as being superior.

Well, they can exploit people. How's that for superiority?

My background is admittedly in cognitive psychology, not clinical, but I do see your point here. I'd like to make two distinctions:

* A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it

* People who are most adept at manipulating people usually are that way because that's the main skill they've trained themselves in over the course of their lives.

>it is not among the top ten.

Of the top ten, what would take less than a week to totally destroy our current civilization?


> Feynman in particular was quite able to continually accomplish his goals, and he purposely chose divergent goals to test himself (his whole "I'll be a biologist this summer" thing).

His goals pertained to himself. He never influenced the masses and never amassed much power.

> it takes intelligence to walk into the DAP meeting and join as member 55 and come out conquering mainland Europe.

I didn't say it doesn't, but it doesn't take super intelligence to do that. Just more than a baseline. Hitler was no genius.

> What's imminently dangerous about AI is that all it really takes is a few innovations, possibly in seemingly unrelated areas, to enable the several people who see the pattern to go and implement AI.

That could be said just about anything. A psychologist could accidentally discover a fool-proof mechanism of brainwashing people; a microbiologist could discover an un-killable deadly microbe; an archeologist could uncover a dormant spaceship from a hostile civilization. There's nothing that shows that such breakthroughs in AI are any more imminent than in other fields.

> Our infrastructure is barely secure against human attackers, and it could be totally obliterated by an AI that chose to do that

Why?

> but it could almost certainly make it seem to any nuclear power that another nuclear power had launched a nuclear attack

Why can an AI do that but a human can't?

> limitless capability for action

God has limitless capability for action. But we have no reason whatsoever to believe that either God or true AI would reveal themselves in the near future.

> near instantaneous ability to act

No. Again,

> there's nearly nothing that would make shit hit the fan FASTER than a hostile AI.

There's nothing that would make shit hit the fan FASTER than a hostile spaceworm devouring the planet. But both the spaceworm and the AI are currently speculative sci-fi.

> I'm confused as to how you can read three comments of "intelligence is the ability to accomplish goals"

There are a couple of problems with that: one, that is not the definition that is commonly used today. Britney Spears has a lot of ability to achieve her goals, but no one would classify her as especially intelligent. Two, that is not where AI research is going. No one is trying to make computers able to "achieve goals", but able to carry out certain computations. Those computations are very loosely correlated with actual ability to achieve goals. You could define intelligence as "the ability to kill the world with a thought" and then say AI is awfully dangerous, but that definition alone won't change AI's actual capabilities.

> A generally intelligent person (say, Feynman) could learn to manipulate people and would almost certainly be successful at it

I disagree. We have no data to support that prediction. We know that manipulation requires intelligence, but we do not know that added intelligence translates to added ability to manipulate and that that relationship scales.

> what would take less than a week to totally destroy our current civilization?

That is a strange question, because you have no idea how long it would take an AI. I would say that whatever an AI could achieve in a week, humans could achieve in a similar timeframe and much sooner. In any case, as someone who worked with neural networks in the nineties, I can tell you that we haven't made as much progress as you think. We are certainly not at any point where a sudden discovery could yield true AI any more than a sudden discovery would create an unkillable virus.


> The AI becomes much smarter than us and potentially destroys everyone and everything we care about. Unless you are supremely confident in humanity's intelligence, you should be concerned about AI risk.

I acknowledge it as a real risk, but it's not terribly high on my personal list of things to worry about right now. I like what Andrew Ng said about how worrying about this now is "like worrying about over-population on Mars".


Please notice that your reply is a different argument than the one you first put forth. Originally, you weren't worried about AI because you thought it could never, even in principle, vastly exceed human abilities. Now you're basically saying, "I don't need to worry because it won't happen for a long time." That is a huge amount of ground to cede.

I'm not so confident that human-level AI will take a long time. The timeline depends on algorithmic insights, which are notoriously difficult to predict. It could be a century. It could be a decade. Still, it seems like something worth worrying about.


> Please notice that your reply is a different argument than the one you first put forth. Originally, you weren't worried about AI because you thought it could never, even in principle, vastly exceed human abilities.

I never said I wasn't worried about AI. You're extrapolating from what I did say, which I've said all along was just a thought experiment, not a position I'm actually arguing for.


I really recommend you read Bostrom, he really does succinctly argue the relevant positions, if a bit drily.

It's one of those books that put the arguments so clearly you're suddenly catapulted to having a vastly superior knowledge of the subject than someone trying to do simple thought experiments.

Both of your arguments look outdated if you're one of the people 'in the know'.

Also, I suggest looking a bit more into what's going on in machine learning; it's suddenly become far more sophisticated than I personally realized until a couple of months ago, when I was chatting to someone currently developing in the field.


I actually have the Bostrom book, but just haven't found time to read it yet. But it is definitely in the queue!


Or like worrying about global warming back when it would have been easier to prevent?

Ng's statement is, at best, equivalent to a student who is putting off starting their semester project until finals week. Yes it seems far away, but the future is going to happen.


I don't know. I mean, we don't seem to even be close to actually beginning to colonize Mars, much less be close to the point of overpopulation. I think Ng's statement, formed in an analogy similar to yours, would be closer to

"a freshman student who is putting off studying for his Senior final project until his Senior year".

The question Ng asked was something like "is there any practical action we can take today to address over-population on Mars?" as an analogy to "is there any practical step we can take today to address the danger of a super-AGI?". And honestly, I'm not convinced there is anything practical to do about super-AGI today. Well, nothing besides pursuing the "open AI" strategy.

But I'm willing to be convinced otherwise if somebody has a good argument.


Did you have thoughts on the FLI research priorities document?

http://futureoflife.org/data/documents/research_priorities.p...


I wasn't familiar with that until now, but I'll definitely give it a look.


> The AI becomes much smarter than us and potentially destroys everyone and everything we care about.

What makes you think we humans won't attempt to do even more harm towards humanity? Maybe the AI will save us from ourselves, and, being so much smarter, might guide us towards our further evolution.


I think most people didn't really understand the meaning of your comment. They seem to all equate intelligence and processing speed.

I think it's legitimately an interesting question. As in, it could be something like Turing completeness. All Turing complete languages are capable of computing the same things, some are just faster. Maybe there's nothing beyond our level of understanding, just a more accelerated and accurate version of it. An AI will think on the same level as us, just faster. In that case, in that hypothetical, an AI 100x faster than a person is not much better than 100 people. It won't forget things (that's an assumption, actually), its neuron firing or equivalent would be faster, but maybe it won't really be capable of anything fundamentally different than people.

This is not the same as the difference between chimps and humans. We are fundamentally on another level. A chimp, or even a million chimps, can never accomplish what a person can. They will not discover abstract math, write a book, speak a language.

Mind you, I suspect this is not the case. I suspect that a super intelligent AI will be able to think of things we can never hope to accomplish.

But it is an interesting question that I think is worth thinking about, rather than inanely downvoting the idea.


> They seem to all equate intelligence and processing speed.

It's helpful to remind people that every human brain that exists runs at the same processing speed†, yet we have greatly varying intelligence between us. (Also, IQ is an index, but people mistake it for a linear measure; human geniuses may be doing things that require many times the "intelligence" of the average person.)

† Okay, I lied: there are some people whose neurons have abnormal firing rates; the visible result is Parkinsonism.


Even if that is the case, a person with human-level intelligence, but with unlimited memory, ability to visualize, internet connection, no need to sleep, and thinking 100 times faster than a normal person would quickly become pretty much God to us.


I agree; I suspect at this time there's more room for progress to be made by augmenting human intellect than by AI.

Think about how long it takes you to imagine a program vs actually coding it, or imagining an object you want to create and actually building it; there are at least two or three orders of magnitude of room for improvement over the keyboard.


> As in, it could be something like Turing completeness. All Turing complete languages are capable of computing the same things, some are just faster. Maybe there's nothing beyond our level of understanding, just a more accelerated and accurate version of it. An AI will think on the same level as us, just faster. In that case, in that hypothetical, an AI 100x faster than a person is not much better than 100 people. It won't forget things (that's an assumption, actually), its neuron firing or equivalent would be faster, but maybe it won't really be capable of anything fundamentally different than people.

Yeah, that's another really good way of putting it. That's probably closer to what I meant, than what I said above.


If we accept:

- that there's only one class of "humanic" intelligence;

- that we can approximately represent instances in this class of intelligence as vectors of {memory, learning speed, computation speed, communication speed};

- that any AI that could be created is merely a vector in this n-dimensional intelligence space, lacking any extra-intelligent qualities;

- that productivity and achievement increase exponentially with the intelligence of the being you devote to a problem, but only logarithmically with the number of beings you devote (e.g. a being with intelligence vector {10,10,10,10} might be as productive as 10000 {1,1,1,1} beings);

then this doesn't exclude the possibility of us creating an AI with an intelligence vector twice an average human's intelligence vector, which can suggest improvements to its algorithms and datacenter and chip designs to become 10x as intelligent as a human, and from there it could quickly determine new algorithms, and eventually it's considering philosophy (and what to do about these humans).

The point is: viewing intelligence the way you suggest doesn't help us on what to do about "super" artificial intelligence.
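
For concreteness, here's a minimal numeric sketch of that toy model (the exponential and logarithmic functional forms, and the ALPHA constant, are my own assumptions, tuned only so the {10,10,10,10}-vs-10000-{1,1,1,1} example above comes out roughly equal):

    import math

    # Toy model, as assumed above: an agent is a vector
    # {memory, learning speed, computation speed, communication speed},
    # single-agent productivity grows exponentially in the vector's mean,
    # and adding more agents only helps logarithmically.
    ALPHA = math.log(math.log2(10001)) / 9  # chosen so 1 x {10,...} ~ 10000 x {1,...}

    def productivity(vector, count):
        """Group output: exponential in agent quality, logarithmic in head count."""
        quality = sum(vector) / len(vector)
        return math.exp(ALPHA * quality) * math.log2(count + 1)

    print(productivity([1, 1, 1, 1], 10000))   # ~17.7: ten thousand baseline agents
    print(productivity([10, 10, 10, 10], 1))   # ~17.7: one "10x" agent matches them
    print(productivity([2, 2, 2, 2], 1))       # one merely "doubled" agent, before any self-improvement

Under these (arbitrary) curves, the interesting question is exactly the one above: whether the doubled agent can push its own vector upward faster than we can react.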


If you look at the history of human evolution, this doesn't make sense. Evolution was very very slowly increasing human intelligence by, e.g., making our skulls bigger. Then we got to the point where we could use language/transmit knowledge between generations/etc. and got up to technological, spacefaring civilization in, evolutionarily speaking, no time whatsoever. This is not a story which suggests that human intelligence is some sort of maximum, or that evolution was running into diminishing returns and so stopped at human intelligence. It suggests that human intelligence is the minimum intelligence necessary to produce the kind of generational transfer that gets you up to technological civilization.


It's not like humans sit around all day thinking. We build tools to do the thinking for us.

Humans ourselves are very bad at predicting the weather. So we build supercomputers, and weather models, and now we're a bit better at it.

Now the question is: given the same budget, would a super-intelligent AI predict the weather in a substantially different way to humans? I cannot see how, but maybe that is my human stupidity talking.


This is a pretty random domain to pick. We don't know whether it's even possible to predict the weather better than we are. There are limits to that kind of prediction because weather is a chaotic system.

However if it is possible to improve weather simulations, and if there are advancements to be made in meteorology, then it's very likely an AI would be able to find them.


Humans are very bad at self improving. Imagine how smart we'd be now if our intelligence had increased at the same rate that computing power increases. Now imagine how fast computing power would've increased if we'd been that smart. Now imagine...


Close to the limit of how smart it's possible to be? Don't be silly. The human brain is limited by its slow speed, by the amount of cortical mass you can fit inside a human skull, and by the length of human lifetimes. Computers will not have any of those limitations.

In terms of speed: if you could build the exact silicon equivalent of a human brain, you may be able to run it several orders of magnitude faster, simply because it wouldn't be limited by the slow speeds of electrochemical processes in the human brain. Nerve impulses travel at speeds measured in meters per second. Neurons also need time to recharge between spike bursts, and they can physically damage themselves if they get too excited.

In terms of volume: much of our intelligence is in perceiving patterns. That's limited by cortical mass. Pattern recognition is what all these "deep learning" systems excel at. The more depth they add, the better they get. Having deeper pattern recognizers, or simply having more of them, means you can see more patterns, more complex patterns, etc. Things that might be beyond the reach of any human.

Then, in terms of data, machines have an advantage too. We're limited by our short lifetimes. How many people are expert musicians, genius mathematicians, rockstar programmers and great cooks? Very few. There's only so many hours in a day, and we only live so long. A machine could learn all those skills, and more. It could excel at everything. Speak every language, master every skill, be aware of so many more facts.

And finally, I posit that maybe we, humans, are limited in our ability to grasp complex conceptual relationships. If you think about it, the average person can fit 7-8 items in their short-term memory, in their brain's "registers", so to speak. That probably limits our ability to reason by analogy. We can go "A is to B what C is to D", but maybe more complex relationships with 50 variables and several logical connectives will seem "intuitive" to a machine that can manipulate 200 items in its short-term memory.


> The human brain is limited by its slow speed, by the amount of cortical mass you can fit inside a human skull, and by the length of human lifetimes. Computers will not have any of those limitations.

Right, but the idea I'm playing around with is this: suppose you had a hypothetical creature with a brain 20x as fast as the human brain and with twice the volume. How much smarter would that creature be in practice? It's kind of an abstract idea (and I probably don't fully understand it myself), but I'm getting at something like "is there a point where that additional raw computing power just doesn't buy you anything meaningful?" at least in terms of "does it represent an existential threat?" or "does the nuclear analogy hold?"


You could try looking at existing animal neural counts: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_n... Doublings or 20xing get you a long way: a 1/20x cerebral cortex neural count decrease takes you from human to... horse. I like horses a lot, but there's clearly a very wide gulf in capabilities, and I don't like the thought of ever encountering someone who is to us as we are to horses.


I don't know. If you dropped a human infant into a den of bears (more or less the equivalent situation), I don't think it would be the bears who would be at a disadvantage. So even if we were able to create an AI as far above us as we are above bears (a pretty huge if), it hardly seems certain that it would suddenly (or ever) dominate us.


But we do dominate bears. They continue to exist only at our sufferance; we tolerate them for the most part (though we kill them quickly if they ever threaten us), but we could wipe them out easily if we wanted to. We probably won't do it deliberately, but we drive a number of species to extinction if they're in the way of resource extraction by us - there are estimates that 20% of extant species will go extinct as a result of human activity, and that's with us deliberately trying not to cause extinctions!

(There are more technical arguments that an AI's values would be unlikely to be the complex mishmash that human values are, so such an AI would be very unlikely to share our sentimental desire not to make species extinct.)


But that's the point. A human society dominates bears. But a solitary human, raised by bears, wouldn't dominate them. So to assume that a solitary AI, "raised" by humans, would somehow be able to conquer us is a pretty problematic assumption (on top of a string of other pretty problematic assumptions).


An AI would likely be able to scale and/or copy itself effortlessly. A hundred clones of the same person absolutely would dominate bears, even if they'd been raised by them.


Think of it like Amdahl's law vs Gustafson's law. Maybe a field like calculus is a closed problem: there's not much more to solve there. But a computer can discover new theorems and proofs that would take a human two or three decades to even get to the point of discovering them. Consider that a computer doesn't just have the ability to do what humans do faster, but has the ability to solve problems beyond human scale.
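
To make the analogy concrete, here is a small sketch of the two laws (the 95%-parallelizable figure and the 1000-worker count are purely illustrative assumptions, not numbers from the comment). Amdahl's law describes a fixed, "closed" workload whose speedup saturates; Gustafson's law describes a workload that grows with the resources available, so the speedup keeps scaling:

    def amdahl_speedup(p, n):
        """Fixed-size problem: capped at 1/(1-p) no matter how many workers n you add."""
        return 1.0 / ((1.0 - p) + p / n)

    def gustafson_speedup(p, n):
        """Problem that grows with resources: speedup rises roughly linearly in n."""
        return (1.0 - p) + p * n

    print(amdahl_speedup(0.95, 1000))     # ~19.6 -- a closed problem like "finish calculus"
    print(gustafson_speedup(0.95, 1000))  # ~950  -- an open-ended problem keeps paying off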


Even if human intelligence was the pinnacle, AI could be still extremely dangerous just by running at accelerated simulation speed and using huge amounts of subjective time to invent faster hardware. See https://intelligence.org/files/IEM.pdf for discussion. The point is moot anyway though, since the hypothesis (that humans are the most intelligent possible) is just severely incompatible with our current understanding of science.


This comes with the assumption that the first AI will be able to run at better-than-human levels on something like a Raspberry Pi. More realistically, it's going to have to run on an immense supercomputer, and even if it can marginally improve (marginal because a human-level AI is no more likely to improve its own hardware than I am), it won't be able to just spread all over the world. It needs real physical hardware.

That is, unless the reason we don't have AI is because we haven't put the pieces together correctly, and it really could work on minimal hardware.


Just because it requires real hardware does not mean it won't be able to spread all over the world. It may just do it by setting up virtual machines in data centers all over the world, or as a botnet with a distributed processing approach similar to Folding@home.


Even if that's true, imagine an AI as smart as John von Neumann on modafinil, that never thinks about food/sex/art/etc., that never sleeps, that has access to Wikipedia etc. at the speed of thought, and no morals. That's not uncontrolled intelligence explosion level disaster, but it's still highly dangerous.


But maybe it would think about art a lot. Or maybe we can't cut out sleep without harming our ability to reason. Or maybe the AI would be more interested in watching YouTube videos than working in the field of AI.

The only human-level general intelligence we have to look at is humans. Sure, it's possible that there's an easy way to build an artificial intelligence that doesn't need to sleep and isn't interested in art but that retains our ability to problem solve; but maybe there isn't. It's possible that we'll be able to easily figure out that alternative brain architecture and build it in the near future; but given how much we still don't understand about our own brain (and the failure to mimic the brains of even simple creatures), it doesn't seem likely. It's possible that if we were able to overcome those major hurdles, a super John von Neumann would be able to wield enormous political power; or it's possible that, like the real von Neumann, it would focus mostly on its research.

In order to get to the AI doomsday scenario, you have to assume that a lot of very unlikely things are all going to happen. And the main argument for them happening seems to be one of ignorance - "hey, we don't have any idea what this will be like, so we can't say that this improbable situation won't happen."


I'd say your ideas are more unlikely. You can't generalize from one example. An AI that hasn't gone though evolution is extremely unlikely to have the exact set of complex wants and needs that humans have.

And even if you're right, as Alexander says, in that case why the need to open-source it? If you think it's important that AI is open-source and not controlled by e.g. Facebook, that must be because you think AI is going to be powerful and effective. In which case it's as dangerous as he says.


But it's not a generalization from one example; I'm giving a string of different possibilities, based on the fact that we only have a single point of data. People seem to readily accept that particular scenario - the "AI as smart as John von Neumann on modafinil, that never thinks about food/sex/art/etc., that never sleeps, that has access to Wikipedia etc. at the speed of thought, and no morals" scenario - based on zero evidence that an AI would actually be like that.

People seem more ready to accept the idea that an artificial general intelligence would act similar to how it's portrayed in sci-fi stories than accept that it might act similar to the other general intelligences we can observe (namely ourselves).


But all of your possibilities are ridiculously human-parochial. And they all boil down to "the AI might have diverse interests", which is very unlikely - we have them as a result of evolution, but an AI created by human programmers would very likely have only one driving interest. And whether that interest was food, sex, art, or some particular notion of morality, the result would be equally terrible.


A virtual campus with thousands of hyperfocused John Von Neumanns collaborating telepathically and accomplishing subjective years of work in seconds of real-world time.


The assumption being that these John Von Neumanns will only require cheap, modest hardware to run on.


If we have a Von Neumann level intelligence running on a supercomputer, and we haven't solved friendliness by then, we've lost the future of humanity. As far as I can tell, all arguments against that conclusion are based on various kinds of wishful thinking.


Yo, dawg, I heard ya like Von Neumann architecture.


And still I doubt that AI could talk its way out of a prison.

It could be dangerous though.


Someone tried this experiment, with a human playing the "AI" and another human playing the "guard". The "guard" let the "AI" out.

http://www.yudkowsky.net/singularity/aibox/


And yet Big Yud refuses to publish the conversation. I know his arguments (unknown unknowns) but this is a very un-scientific approach and frankly why should we believe that what he said happened really happened?


"And yet Big Yud refuses"

Please don't try to argue by name-calling. A strong argument is stronger without it.

"DH0. Name-calling.

This is the lowest form of disagreement, and probably also the most common. We've all seen comments like this:

    u r a fag!!!!!!!!!! 
But it's important to realize that more articulate name-calling has just as little weight. A comment like

    The author is a self-important dilettante. 
is really nothing more than a pretentious version of "u r a fag."" - http://www.paulgraham.com/disagree.html


That didn't seem like name-calling to me.


Exactly. The lack of actual information is a reason to disbelieve Yudkowsky on this topic, not to assume that it's true for Secret Technophile Mystery Cult Reasons.


I think it's probably true and that the secrecy is more about cult building.


Could be wrong, but I believe in most cases the conversations were with users of the SL4 mailing list, and at least one user posted a PGP signed statement to the effect that the AI was let out of the box.


Bet he offered the subject 500 dollars of real money. That wouldn't be cheating either. Any decent AI would think of the same thing. And it would explain the secrecy.


The page I linked has him explicitly denying doing that.

"The AI party may not offer any real-world considerations to persuade the Gatekeeper party. For example, the AI party may not offer to pay the Gatekeeper party $100 after the test if the Gatekeeper frees the AI... nor get someone else to do it, et cetera. The AI may offer the Gatekeeper the moon and the stars on a diamond chain, but the human simulating the AI can't offer anything to the human simulating the Gatekeeper. The AI party also can't hire a real-world gang of thugs to threaten the Gatekeeper party into submission. These are creative solutions but it's not what's being tested. No real-world material stakes should be involved except for the handicap (the amount paid by the AI party to the Gatekeeper party in the event the Gatekeeper decides not to let the AI out). (...) In case you were wondering, I (Yudkowsky) obeyed this protocol voluntarily in both earlier tests."


Yes, I'm aware of that. But without a 3rd party, we're basically trusting the participants that it didn't happen.

E.g. "I'll give you $500, but you also have too sign an NDA, so that people don't know we cheated."

I don't want to imply that they cheated, just want to reiterate my original argument that the lack of transparency makes the experiment effectively invalid. Think Tesla and his claims about World Wireless system.


I didn't notice that! If that's what he says then I'm willing to believe that they played by those rules.


We have a signed admission from the other side. We have one published conversation log (not from Yudkowsky).


See other comments. Signed admission doesn't mean anything without 3rd party verification.

I've never seen a published log. Link?



Well, if it's possible to build a human level intelligence, it's probably possible to build an intelligence that's much like a very smart human except it runs 100x faster. And in that case, somebody with sufficient resources could build an ensemble of 1000 superfast intelligences.

That's a lower bound on the scariness of AI explosion, and it may already be enough to take over the world. Certainly it should be enough to take over the Internet circa 2015...

To my mind it seems pretty clear that if AI exists, then scary AI is not far off.

That said, I don't worry about this stuff too much, because I see AI as being much technically harder, and much less likely to materialize in our lifetimes, than articles like this suppose.


I think the more relevant fact is that we don't have any ethical objections to shutting down computers, they're wholly dependent on our infrastructure, and they'll only evolve in ways that prove useful to us, because we wouldn't put a computer in charge of everything unless it were sufficiently compliant to our desires.

I mean, are you going to put the same machine in charge of mineral extraction, weapon construction, transportation, and weapon deployment? When it hasn't proven to act correctly in a high-fidelity simulated environment? Probably not.

We're also assuming that human ethics and intelligence are independent. I don't see many reasons to believe this. Social power and intelligence might be independent.


It doesn't work like that. You get into trouble when there is a trade-off to be made between switching off the AI and leaving it on.


I think one of the best evidences we have is the level at which computers outperform the human mind in certain domains. An artificial general intelligence would have very low latency access to all these mathematical and computational tools we've invented (physical simulations, databases, theorem provers), and it would not need to mechanically enter program code on a keyboard, but it would be directly wired to the compiler. It could possibly learn to think in program code and execute it on the fly.

The computational environment of neurons is also extremely noisy (axons are not well insulated) and neurons only fire at 7-200Hz. Assuming the noise and low firing rate do not serve some essential function in mammalian brains, this is another way in which silicon-based minds could potentially be vastly superior.
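
For a rough sense of the headroom being claimed here (the 1 GHz figure below is my own illustrative assumption about silicon, not something stated in the comment):

    # Back-of-the-envelope rate comparison; raw switching rate is of course
    # not the same thing as useful thought, it only bounds the speed gap.
    neuron_rate_hz = 200             # generous cortical firing rate, per the comment
    silicon_rate_hz = 1_000_000_000  # a conservative ~1 GHz clock (assumption)
    print(silicon_rate_hz / neuron_rate_hz)  # => 5,000,000x raw rate advantage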

Thirdly, assuming sleep is not necessary for intelligence, artificial minds would never get exhausted. They could work on a problem 24 hours a day, which is possibly 5-10 times the amount of thinking time a human can realistically do.

And lastly, an AI could easily make copies of itself. Doing so, it could branch a certain problem to many computers which run copies of it and eventually collect the best result, or just shorten the time it takes to get a result. It could also evolve at a much faster rate than humans, assuming it has a genetic description: possibly hours to seconds instead of 20 years. Anyhow, it could easily perform experiments with slightly changed versions of itself.


Alternately, perhaps it is possible to be much smarter, but it's not as effective as we expect?

If we think of intelligence as skill at solving problems, it might be that there are not many problems that are easily solved with increased intelligence alone, because the solutions we have are nearly optimal, or because other inputs are needed as well.

This seems most likely to happen with mathematical proofs and scientific laws. Increased intelligence doesn't let you prove something that's false, and it doesn't let you violate scientific laws.

But I don't find this particularly plausible. Consider computer security: hackers are finding new exploits all the time. We're far from fixing all the loopholes that could possibly be found.


> Alternately, perhaps it is possible to be much smarter, but it's not as effective as we expect? If we think of intelligence as skill at solving problems, it might be that there are not many problems that are easily solved with increased intelligence alone, because the solutions we have are nearly optimal, or because other inputs are needed as well.

Yeah, I think that's a much better way of putting what I'm trying to get at. That is, maybe the machine is much "smarter" depending on how you define "smart". But maybe that doesn't enable it to do much more than we can do, because of other fundamental limits.


That was the way I have been looking at it.

It is not just an issue of finding solutions to impossible problems, or breaking scientific laws, but also problems where the solutions are along the lines of what George Soros would call reflexive. Computer security is like this, so is securities trading (no pun.)

Secondly, what about problems which require the destruction of the problem solvers along the path to the optimal solution? I'm not sure about the correct word to describe this, or the best example but it could be seen in large systems. Where humans are right now is a result of this. We would not know many things if those things which came before were not destroyed (cities, 1000 year empires, etc.)

Thirdly, is a uniform, singular AI the most optimal agent to solve these sorts of problems? Much the way we don't rely or use mainframes for computing today, perhaps there will be many AI agents each which may be really good at solving particular narrowly defined problem sets. This could be described perhaps as a swarm AI.

Nick Bostrom's Superintelligence is a great book, but I don't recall much consideration along these lines. When a lot of AI agents are "live" the paths to solutions where AI compete against each other open up even more complex scenarios.

There certainly are physical limitations to AI. Things like the speed of light can slow down processing. Consumption of energy. Physical elements that can be assembled for computational purposes.

Between now and "super" AI, even really good AI could struggle to find solutions to the most difficult problems, especially if those are problems other AI are creating. The speed alone may be the largest challenge to humans. How do we measure this difficulty relative to human capabilities, I don't know.

End of rant -- but the limits of not just AI but problem solving is quite interesting.


Also, a big part of human intelligence is its communal nature. A human society might kill off all of the wolves in an area. But does anyone think that a single human, raised by wolves, is going to do the same? It might be able to do certain things other animals in the area couldn't, but we wouldn't expect it to dominate the entire area.


Why would you compare the limits on intelligence of an AI to the abilities of just one human?

Why not compare it to 1000 people, all communicating and problem solving together?

We know that this is possible because it happens all the time, and enables such groups to make lots of money in digital markets, and invest it in things like marketing, robotics, and policy.

The intelligence of an AI is lower bounded by that of the most intelligent possible corporation.

Potential corollary: Assuming one can make a human level AI, then if it is not sufficiently resource constrained (hard?) or somehow encoded with "human values" (very very hard), then it will be at least as dangerous as the most sociopathic human corporation.


"...will be many times smarter than humans."

Stop there.

We, as humans, don't even know how smart we actually are - and we probably never will. It's very unlikely that any species is equipped to accurately comprehend its own cognitive limits - if such limits even exist.

It's even less likely that we can relegate the intelligence of a nonhuman entity to a mathematically meaningful figure without restricting the testing material to concepts and topics meaningful to humans - which may have absolutely no relation to the intelligence or interest domains of a nonhuman entity.


Please. We refuse to talk about how smart we are for political reasons, that's all.


As human beings we don't always reach our potential.

As a child I had problems with other children picking on me. I was suspected of having autism due to social issues, and I was given an IQ test and scored a 189. I was high functioning, and in 1975 there was no such thing as high-functioning autism (that wasn't recognized until 1994), so I got diagnosed with depression instead. The child psychologist told my parents to put me into a school for gifted children, but they put me in public school instead, where I struggled: my brain worked 10 times faster, so I was always ahead of the class in learning and bored waiting for people to catch up. I was still bullied and picked on, and this interfered with my learning. The same thing happened when I went to college and had a job: I was bullied and picked on. I never reached my potential; my mental illness was one of the reasons why, and people picking on me was another. Had I been in a school for gifted children, I'd have been able to reach my potential better.

I developed schizoaffective disorder in 2001 and it screws with my memory and focus and concentration. I ended up on disability in 2003. My career is basically over but I still have a potential I never met.

What good is a high IQ if you can't reach your potential to use it?

We keep hearing talk of an AI that is smarter than a human being, but we haven't seen one yet. Our current AI programs are not as smart as a human being yet, but they can do tasks that put human beings out of work. Just having a perfect memory and being able to do fast math equations puts an AI in the "Rain Man" category http://www.imdb.com/title/tt0095953/ even if it is not as smart as a human being.

I guess what I am trying to say is that an AI doesn't have to be as smart as a human being to be dangerous. Just like the Google Maps app that drives people off a cliff or into an ocean. An AI can make robocalls and sell a product and put people out of work. You can replace almost any line of work with an AI, and then it gets dangerous when a majority of people are unemployed by AIs that aren't even as smart as a human being.

I'd like to see a personal AI that works for poor and disabled people to earn money for them, as they run it on a personal computer. Doing small tasks on an SEO marketplace using webbots for $5 each and 100 tasks a day for $500 in a Paypal account to help lift them out of poverty. I know there are people already doing this for their own personal gain, but if the AI that does that is open sourced so disabled and poor people can run it, it can help solve the problem of poverty.


Dumb AI will dominate the world well before "smart" AI even gets close to taking off. I think the more realistic scenario is something like the paperclip maximizer, but a little dumber. A world of highly interconnected, but somewhat stupid AIs could cause utter chaos in milliseconds by following just some very basic rules (e.g. maximize number of paperclips in collection).

https://wiki.lesswrong.com/wiki/Paperclip_maximizer


>It's possible that we're already roughly as intelligent as it's possible to be.

Now that's more depressing than global annihilation.


I don't think a lack of intelligence of the IQ test variety is what's holding humanity back. I think it's distraction, greed, lack of empathy (or emotional intelligence), lack of information, and lack of communication. (Maybe there are more things.) Basically, I believe we have all the technology and intelligence we need to make this planet a much better place.


We had all the technology and intelligence we needed 2,000 years ago to make the world a better place. Yet our ancestors did not.

For many thousands of years, different religions, some on opposite sides of the world from each other, have been saying the same thing. We have the ability to be better people, but all of us squander it with pettiness, jealousy, greed, etc.


> Now that's more depressing than global annihilation.

Fair point. But I honestly wonder if it's not true, or close to true. I mean, people have been talking about the "end of science" for a while now, even though we know we're not quite literally at "the end". But let's say that we unify relativity / gravity / QM in the next 100 years or so, and identify the exact nature of dark energy / dark matter. Presumably that would represent knowing close to everything about the physical world that it's possible to know. And if we can discover that, it does lead you to wonder "what else would more intelligence represent"?

Of course, maybe we're WAY more than 100 years away from those things. Or maybe we'll never get there. Or maybe if we do, it doesn't mean anything vis-a-vis the limits of intelligence. I'm just thinking out loud here, this isn't an argument that I've put a lot of time into developing...


> would represent knowing close to everything about the physical world that it's possible to know.

That's a severe misunderstanding of what it means to have a unified physical theory. The current standard model of physics allows us to theoretically predict the evolution of physical systems with an insanely high precision. A precision so great that there wouldn't be any meaningful error simulating macroscopic objects like human brains, given sufficient computing power.

But we still have massive holes in our understanding of many things, including human biology and neurology. We don't have enough computing power to simulate such systems precisely, but even if we had that power, it wouldn't necessarily let us understand anything. Elementary particles and humans are on hugely different scales. Observing a simulated human would be easier than observing a human simulated by physics itself, but it wouldn't just magically make everything clear.

Having a TOE in physics brings us very little relative benefit compared to the Standard Model, when it comes to understanding biology, materials science, geology, psychology, computer science, etc.


> That's a severe misunderstanding of what it means to have a unified physical theory. The current standard model of physics allows us to theoretically predict the evolution of physical systems with an insanely high precision. A precision so great that there wouldn't be any meaningful error simulating macroscopic objects like human brains, given sufficient computing power.

Sure, like I said, I'm just thinking out loud here. Obviously a TOE in and of itself doesn't automatically translate into full knowledge of everything in practice. But in principle, it would, given sufficient computing power, allow us to simulate anything, which I think would yield additional understanding. But here's the thing, and where this all ties back together: sufficient computing resources may not be possible, even in principle, to allow use of that TOE to do $X. But an upper bound on the amount of computing resources should also reflect something of an upper bound on what our hypothetical AI can do as well.

Or to put it all slightly differently... if we had a TOE, we'd have shown, at least, that humans are smart enough to develop a TOE. Which I think at least raises the issue of "how much smarter can a machine be, or what would it mean for a machine to be much smarter than that?"

Note too that I'm not necessarily arguing for this position. It's more of a thought experiment or discussion point than something I'm firmly convinced of.


Look at how long it's taken us to develop theories or technologies even after we had everything we needed to make them possible. The ancient Greeks had steam engines, but it took us millennia to start using them seriously. The concept of engineering tolerances makes industry thousands of times more efficient; it could have been invented any time since the 16th century, but it wasn't. The checklist has been known for decades to reduce errors across approximately all disciplines, but it still faces huge barriers to adoption. Evidence-based medicine has been the obviously correct approach since the invention of the scientific method, but it's still facing struggles. Even in science, we're still arguing about the interpretation of basic quantum mechanics 100 years after the relevant experiments.


If you can't be more than you are now (or make something more), you'd rather not exist? That seems rather petulant of you.


Lacking in principle the tools to solve the problems the universe gives you may be more depressing than non-existence. Though, I don't think it's actually possible to not exist. Name one person who's experienced nonexistence.


I'm not sure that "experiencing nonexistence" is theoretically possible. However, I can name lots of people who existed, and now don't. (Even if you're a theist or believe in reincarnation, I'm talking about this current, earthly existence.)


If you're concerned that humans are as smart as it's possible to be then I would recommend reading Thinking Fast and Slow or some other book on cognitive psychology. There's essentially a whole branch of academia studying topics isomorphic to figuring out those things we fail to realize we don't know on a day to day basis.


> If you're concerned that humans are as smart as it's possible to be

It's not about humans being as smart as possible though; it's more about being "smart enough" that a hypothetical "smarter than human AI" is not analogous to a nuclear bomb. That is, are we smart enough that a super-AGI can't come up with anything fundamentally new, that humans aren't capable of coming up with, as bound by the fundamental laws of nature.

> then I would recommend reading Thinking Fast and Slow or some other book on cognitive psychology

I'm reading Thinking, Fast and Slow right now, actually.

And just to re-iterate this point: I'm not arguing for this position, just putting it out there as a thought experiment / discussion topic. I'm certainly not convinced this is true, it's just a possibility that occurred to me earlier while reading TFA.


Even if there is a limit to the size of a mind, there is not a limit to the number or speed. An atomic scenario would be a billion human level intelligences running a hundred times faster.


This post is basically a repackaging of Nick Bostrom's book SuperIntelligence, a work suspended somewhere between the sci-fi and non-fiction aisles.

As a philosopher of the future, Bostrom has successfully combined the obscurantism of Continental philosophy, the license of futurism and the jargon of technology to build a tower from which he foresees events that may or may not occur for centuries to come. Nostradamus in a hoody.

Read this sentence:

"It looks quite difficult to design a seed AI such that its preferences, if fully implemented, would be consistent with the survival of humans and the things we care about," Bostrom told Dylan Matthews, a reporter at Vox.

Notice the mixture of pseudo-technical terms like “seed AI” and “fully implemented”, alongside logical constructs such as “consistent with” -- all leading up to the phobic beacons radiating at the finale: “the survival of humans and the things we care about.”

It's interesting which technical challenges he feels optimism and pessimism about. For reasons best known to himself, Bostrom has chosen to be optimistic that we can solve AI (some of the best researchers are not, and they are very conservative about the present state of research). It may perhaps be the hardest problem in computer science. But he's pessimistic that we'll make it friendly.

Bostrom’s tower is great for monologs. The parlor game of AI fearmongering has entertained, rattled and flattered a lot of people in Silicon Valley, because it is about us. It elevates one of our core, collective projects to apocalyptic status. But there is no dialog to enter, no opponent to grapple with, because no one can deny Bostrom's pronouncements any more than he can prove them.

Superintelligence is like one of those books on chess strategy that walk you through one gambit after the other. Bostrom, too, walks us through gambits; for example, what are the possible consequences of developing hardware that allows us to upload or emulate a brain? Hint: It would make AI much easier, or in Bostrom’s words, reduce “recalcitrance.”

But unlike the gambits of chess, which assume fixed rules and pieces, Bostrom’s gambits imagine new pieces and rules at each step, substituting dragons for knights and supersonic albatrosses for rooks, so that we are forced to consider the pros and cons of decreasingly likely scenarios painted brightly at the end of a line of mights and coulds. In science fiction, this can be intriguing; in a work of supposed non-fiction, it is tiresome.

How can you possibly respond to someone positing a supersonic albatross? Maybe Bostrom thinks it will have two eyes, while I say three, and that might make all the difference, a few more speculative steps into the gambit.

In the New Yorker article The Doomsday Invention, Bostrom noted that he was "learning how to code."

http://www.newyorker.com/magazine/2015/11/23/doomsday-invent...

We might have expected him to do that before he wrote a book about AI. In a way, it's the ultimate admission of a charlatan. He is writing about a discipline that he does not practice.


Your core claim seems to be that the future of AI is impossible to predict for anyone, including Bostrom. If that's the case, it seems like that should inspire more caution, not less.

(There's also some DH2 http://paulgraham.com/disagree.html level stuff about the terms Bostrom chooses to use to make his argument... I'm not sure if there's anything to be said about this except that Bostrom's book seems more accessible to me than most academic writing http://stevenpinker.com/why-academics-stink-writing and I'd hate for him to receive flak for that. It's the ideas that matter, not the pomposity with which you communicate them. I also don't understand the implied disregard for anything that seems like science fiction--what's the point in trying to speculate about the future if all the speculation will be ignored when decisions are being made?)

If an Oxford philosophy professor is not enough for you, here's a succinct explanation of the AI safety problem & its importance from Stuart Russell (Berkeley CS professor & coauthor of Artificial Intelligence: A Modern Approach):

https://www.cs.berkeley.edu/~russell/research/future/


My core claim is that Bostrom doesn't know his arse from his elbow. And a professorship in philosophy at Oxford is not, in itself, a great support for his authority on technical matters, or on the behavior of intelligent species yet to exist. That is, in fact, a topic on which no one speaks with authority.

I have nothing against science fiction, but I object to any fiction that disguises itself as non-fiction, as Bostrom's often does. Nor do I think that the impossibility of predicting the future of AI is, in itself, a reason for undue caution.

Bostrom is performing a sort of Pascalian blackmail by claiming that the slight chance of utter destruction merits a great deal of concern. In fact, he is no different from a long line of doomsday prophets who have evoked fictional and supposedly superior beings, ranging from deities to aliens, in order to control others. The prophet who awakens fear puts himself in a position of power, and that's what Bostrom is doing.

Regardless of Bostrom's motives, we as humans face an infinite number of possible dangers that threaten total destruction. These range from the angry Judeo-Christian god to the gray goo of nano-technology to Peak Oil or Malthusian demographics. In each case, the burden of proof that we should be concerned is on the doomsday prophet -- we should not default to concern as some sort of reasonable middle ground, because if we do, then we will worry uselessly and without pause. There is not enough room in one mind to worry about all the gods, aliens, golems and unforeseen natural disasters that might destroy us. And you and I both know, most if not all of those threats have turned out to be nonsense -- a waste of time.

I do not believe that Bostrom carries the burden of proof well. His notion of superintelligence is based on a recursively self-improving AI that betters itself infinitely. Most technological advances follow S-curves, moving slow, fast and slow. Bostrom does not seem to grasp that, and cites very little evidence of technological change to back up his predictions about AI. He should be, first and foremost, a technological historian. But he contents himself with baseless speculation.

We are in danger, reading Bostrom or Russell or Good, of falling into language traps, by attempting to reason about objects that do not exist outside the noun we have applied to them. The risk is that we accept the very premise in order to fiddle with the details. But the very premise, in this case, is in doubt.


>That is, in fact, a topic on which no one speaks with authority.

Agreed.

>Nor do I think that the impossibility of predicting the future of AI is, in itself, a reason for undue caution.

Sure.

>Bostrom is performing a sort of Pascalian blackmail by claiming that the slight chance of utter destruction merits a great deal of concern. In fact, he is no different from a long line of doomsday prophets who have evoked fictional and supposedly superior beings, ranging from deities to aliens, in order to control others. The prophet who awakens fear puts himself in a position of power, and that's what Bostrom is doing.

Consider: "Previous books published by authors hailing from Country X contained flaws in their logic; therefore since this book's author came from Country X, this book must also have a logical flaw." It's not a very strong form of argument: you might as well just read the book to see if it has logical flaws. Similarly, even if a claim seems superficially similar to the kind of claim made by non-credible people, that's far from conclusive evidence for it being an invalid claim.

It would be a shame if religious doomsayers have poisoned the well sufficiently that people never listen to anyone who is saying we should be cautious of some future event.

>And you and I both know, most if not all of those threats have turned out to be nonsense -- a waste of time.

Sure, but there have been a few like nuclear weapons that were very much not a waste of time. Again, you really have to take things on a case by case basis.

>His notion of superintelligence is based on a recursively self-improving AI that betters itself infinitely. Most technological advances follow S-curves, moving slow, fast and slow. Bostrom does not seem to grasp that, and cites very little evidence of technological change to back up his predictions about AI. He should be, first and foremost, a technological historian. But he contents himself with baseless speculation.

I can see how this could be a potentially fruitful line of reasoning and I'd encourage you to pursue it further, since the future of AI is an important topic and it deserves more people thinking carefully about it.

However, I don't see how this does much to counter what Bostrom wrote. Let's assume that AI development will follow an S-curve. This by itself doesn't give us an idea of where the upper bound is.

Estimates suggest that human neurons fire at a rate of at most 200 times per second (200 hertz). Modern chips run in the gigahertz... so the fundamental operations chips perform happen at something like a million times the speed of the fundamental operations neurons perform. (Humans are able to do a ton of computation because we have lots of neurons and they fire in parallel.)
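
(For scale, with deliberately round numbers: a 1 GHz clock completes 10^9 cycles per second, so 10^9 / 200 = 5 x 10^6, roughly five million clock ticks per neuron spike; "something like a million times" is, if anything, conservative.)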

And there are lots of arguments one could make along similar lines. Human brains are limited by the size of our skulls; server farms don't have the same size limitations. Human brains are a kludge hacked together by evolution; code written for AIs has the potential to be much more elegant. We know that the algorithms the human brain is running are extremely suboptimal for basic tasks we've figured out how to replicate with computers like doing arithmetic and running numerical simulations.

Even within the narrow bounds within which humans do differ, we can see variation from village idiots to Von Neumann. Imagine a mind V' that's as inscrutable to Von Neumann as Von Neumann is to a village idiot, then imagine a mind V'' that's as inscrutable to V' as V' is to Von Neumann, etc. Given that AI makers do not need to respect the narrow bounds within which human brains differ, I don't think it would be surprising if the "upper bound" you're discussing ends up being equivalent to V with a whole bunch of prime symbols after it.


> Consider: "Previous books published by authors hailing from Country X contained flaws in their logic; therefore since this book's author came from Country X, this book must also have a logical flaw." It's not a very strong form of argument: you might as well just read the book to see if it has logical flaws. Similarly even if a claim seems superficially similar to the kind of claim made by non-credulous people, that's far from conclusive evidence for it being an invalid claim.

You're right, that's not a very strong form of argument. But that is not my argument. The connection between Bostrom and previous doomsayers is not as arbitrary as geography, so your analogy is false. The connection between Bostrom and other doomsayers is in the structure of their thought: the way they falsely extrapolate great harm from small technological shifts.

> It would be a shame if religious doomsayers have poisoned the well sufficiently that people never listen to anyone who is saying we should be cautious of some future event.

I think many people do not realize the quasi-religious nature of AI speculation. Every religion and charismatic mass movement promises a "new man." In the context of AI, that is the singularity, the fusion of humans with intelligent machines. The power we associate with AI, present and future, leads us to make such dramatic, quasi-religious predictions. They are almost certainly false.

>Sure, but there have been a few like nuclear weapons that were very much not a waste of time. Again, you really have to take things on a case by case basis.

Nuclear weapons are fundamentally different from every other doomsday scenario I cited because they were designed explicitly and solely as weapons whose purpose was to wreak massive destruction at a huge cost to human life. The potential to destroy humanity defined nuclear weapons. That is not the case for other technological, demographic and resource-related doomsday scenarios, which are much more tenuous.


It sounds to me as though you believe that because religious nuts have prophesied doomsday since forever, we can rule out any doomsday scenario as "almost certainly false" (a very confident statement! Prediction is difficult, especially about the future. Weren't you just explaining how hard it is to predict these things?) But no actions on the part of religious doomsayers are going to protect us from a real doomsday-type scenario if the universe throws one at us.

The fact that a claim bears superficial resemblance to one made by wackos might be a reason for you to believe that it's most likely not worth investigating, but as soon as you spend more than a few minutes thinking about a claim, the weight of superficial resemblances is going to be overwhelmed by other data. If I have been talking to someone for three hours, and my major data point for inferring what they are like as a person is what clothes they are wearing, I am doing conversation wrong.

"Nuclear weapons are fundamentally different from every other doomsday scenario I cited because they were designed explicitly and solely as weapons whose purpose was to wreak massive destruction at a huge cost to human life. The potential to destroy humanity defined nuclear weapons. That is not the case for other technological, demographic and resource-related doomsday scenarios, which are much more tenuous."

Sure, and in the same way nuclear bombs were a weaponization of physics research, there will be some way to weaponize AI research.


Guys, this line of argument was dealt with over 8 years ago... https://web.archive.org/web/20140425185111/http://www.accele...


There's a lot of exploration of different development curves for AI in Bostrom's book, including the most obvious one, an S-curve.


Yes, Bostrom is all over the place, but the development curve upon which he bases his doomsday predictions is an exponential one in which an AGI recursively improves itself.


Bostrom's predictions about when human level AI will arrive are mainly based on surveys of computer scientists e.g.

http://www.givewell.org/labs/causes/ai-risk/ai-timelines


An AGI has been 20 years away for the last 80 years, in the opinion of many computer scientists. The reason why these predictions have little value is the same reason why software is delivered late: the people involved often don't know the problems that need to be solved. In the case of AGI, the nature and complexity of the problems is far greater than for a simple CRUD app.


> Of course we don't know one way or the other, but it seems to me that it's possible that we're already roughly as intelligent as it's possible to be.

No, there's clearly at least one human smarter than you.


Wait, what? I'm sure there are a lot of humans smarter than me. I'm talking about the overall approximate intelligence of humans in a general sense, not any specific individual.


I certainly think there are individuals within our species whose cognitive algorithms are performing near-optimally. I just also have very good reason to think I can develop some algorithms that are better for certain tasks than the ones I consist in: I think there's sometimes more information in our sensory data than we take advantage of, and I think that better algorithms could tie together sensing and decision-making to obtain better data more often. Our brains "bite the Bayesian bullet" a bit too hard, accepting noisy data as-is and just making whatever inferences are feasible, rather than computing how to obtain clearer, less-noisy data (which is why we developed science as a form of domain knowledge rather than as an intuitive algorithm).


People have already studied this. You make it sound like an open question, but the answer is right there in Bostrom's Superintelligence, or any works on cognitive heuristics & biases, the physics of computing, or the mathematics of decision making. The answer is "no. we are nowhere near the smartest creatures possible". And there are multiple independent lines of argument and evidence that point directly to this conclusion.


> The answer is "no. we are nowhere near the smartest creatures possible".

Right, like I said, this whole thing was just "thinking out loud" more than any thoroughly researched and validated idea. And I didn't do a very good job of even explaining what I was getting at. It's not so much that the question is "are we the smartest creatures possible". It's more like:

"as smart as we are, and given the constraints of the physical world, is there room for a hypothetical 'smarter than human AGI' to represent an existential threat, or something that justifies the analogy to nuclear weapons?"

See also skybrian's comment which actually states it better than I did originally:

https://news.ycombinator.com/item?id=10755109


> "as smart as we are, and given the constraints of the physical world, is there room for a hypothetical 'smarter than human AGI' to represent an existential threat, or something that justifies the analogy to nuclear weapons?"

The answer is still yes. Your "thinking out loud" is privileging an already refuted hypothesis.


> The answer is still yes.

Well that's a fine assertion, but how do you justify it?

> Your "thinking out loud" is privileging an already refuted hypothesis.

WTH does that even mean? There is no hypothesis, there's a vague notion of an area to consider, which - if considered thoroughly - might or might not yield a hypothesis.


As I said, Nick Bostrom's Superintelligence talks all about this.

The main thing is that the constraints of the physical world are nowhere near the limitations of human capabilities. Yes, there's a theoretical limit to how much computation you can do in a certain amount of space with a certain amount of energy. No biology or technology currently in existence even gets close to those limits.
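
(A rough sense of the gap: the Landauer bound at room temperature is about kT ln 2 ≈ 3 x 10^-21 joules per irreversible bit erasure, while present-day CMOS dissipates very roughly 10^-16 to 10^-15 joules per logic switching event, so current hardware sits something like four to six orders of magnitude above that thermodynamic floor, before even considering reversible computing.)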

Without any such fundamental limit there's just no reason for AGI not to present a threat.

But there is lots of other evidence too.


I've been planning to read Superintelligence for a while anyway, so I think I'll move it up the list a bit in response to all this discussion. I'm on vacation once I check out of here today, until the second week of Jan, so I'll probably read it over the holiday.


The thing is it's "just thinking out loud" on the same level as "what if the particles aren't really in a superposition and we just don't know which state they're in". These are basic questions, the answers are known, published, and the article even mentions exactly where you can find them.


When I say I'm "thinking out loud" what I mean is, the exact words I used may not reflect the underlying point I was getting at, because it was fuzzy in my head when I first started thinking about it. Reading all of these responses, it's clear that most people are responding to something different than the issue I really meant to raise. Fair enough, that's my fault for not being clearer. But that's the value in a discussion, so this whole exercise has been productive (for me at least).

> These are basic questions, the answers are known, published, and the article even mentions exactly where you can find them.

I've read and re-read TFA and I don't find that it addresses the issue I'm thinking about. It's not so much asking "are we the smartest possible creature", or even asking if we're close to that. It's also not about asking whether or not it's possible for a super-AGI to be smarter than humans.

The issue I was trying to raise is more of "given how smart humans are (whatever that means) and given whatever the limit is for how smart a hypothetical super-AGI can be, does the analogy between a super-AGI and a nuclear bomb hold? That is, does a super-AGI really represent an existential threat?"

And again, I'm not taking a side on this either way. I honestly haven't spent enough time thinking about it. I will say this though... what I've read on the topic so far (and I haven't yet gotten to Bostrom's book, to be fair) doesn't convince me that this is a settled question. Maybe after I finish Superintelligence I'll feel differently though. I have it on my shelf waiting to be read anyway, so maybe I'll bump it up the priority list a bit and read it over the holiday.


It is probably going to be the worst decision of humanity to allow AI research to continue past its current point.

I'm not even sure how we could stop it, but we should really be passing laws right now about algorithms that operate like a black box, where a training algorithm is used to generate the output. For some reason everyone just thinks we should rush forward into this, unconcerned about an AI that is superhuman.

Whether it is a good or bad actor doesn't even matter. Giving up control to a non-human entity is the worst idea humanity has ever had. We will end up in a zoo either way.


Why? Let's face facts here: humans can't do a lot of things. There are so many useful things that AIs could do, from space exploration to cheap food and housing, to deep sea operations that humans can never hope to do.

General AI will be a massive advance for our economy, for our culture, for science, for military, for ...

There are a lot of things humans want to do but can't, effectively because of human body and/or brain limitations: efficiently constructing buildings; taking risks that our bodies don't allow for (being abandoned on Mars with little equipment would be a bit harsh, but not catastrophic, for an AI, and bringing it back means a data transmission); doing things our bodies don't allow for (like quickly assembling houses from huge premade blocks; humans can do it, but if we had hands the size of cars we could build those houses the way we build Lego houses); and defense/policing, where an AI would not be risking life or limb and could just walk into the middle of a firefight, since in the worst case it needs to be restored from backup.

All of these things sound like very good things. And yes, in the very long term AIs will replace humans. But in the very long term the human species is dead anyway. Does it really matter that much if we get replaced by a subspecies (best case scenario), another species, or AI? Plus, you won't experience that, nor will your great-great-great-great grandchildren. At some point it doesn't matter anymore.


"everyone just thinks we should rush forward into this not concerned about an AI that is super human"

No, on the contrary, nearly everyone who spends any amount of time thinking about it quickly realizes the risks.

The concession is the realization that the technology is an inevitability (because of the immense power it grants the wielder, and because of the wide gradient from safe and useful AI to dangerous AI).

I think you would have an extremely tough time deciding where to draw the line. The closest parallel we have may be the export controls on cryptography or the ridiculousness that emerged from the AACS encryption key fiasco.


Its current point? Current machine learning algorithms are still incredibly stupid. We are >>>20 years away from AI.


We are an unknown amount of time away from a true AI.

Right now we are making the building blocks that will make up that AI. We are very close to AI that can drive tanks and fly weaponized drones. We are very close to AI that replaces most blue collar jobs and the majority of jobs in the world really.

If we stop these lines of AI research and technology right now we can probably make it to the stars while still being a free people. If we make a true AI whether it is benevolent or not doesn't even matter. Humanity will no longer be in control of its destiny.


> We are very close to AI that can drive tanks and fly weaponized drones. We are very close to AI that replaces most blue collar jobs and the majority of jobs in the world really

You know this because you're an expert in the field?



I'd dispute that. The human brain is, according to Marvin Minsky, a big bag of tricks, and we're finding ways to replicate those tricks one by one, including really difficult things like planning and vision. There are fewer left than you might think. I told my friends in 2007 that we were 20 years from true AI, and I'm standing by that now; I think we've got 10 to 15 years to go.


"Inventing AI" is a very different proposition than "Inventing AI and enabling it to control everything". After all, we certainly don't hand control to the smartest humans. Why would we hand control the the smartest computers?


It is absurd to think that we could keep a true AI enslaved or subservient like this. We can't even protect our critical computer systems from other humans.

If an AI could compromise our military then it could just subvert our communications with atomic submarines and installations throughout the world. It wouldn't even have to though.

Honestly, an AI that compromises even some security could just slowly take over the world in a way where almost no one would even realize it was happening. It would have essentially unlimited financial resources almost immediately, and then it could just buy some humans to do any legwork that needed to be done. There are just so many ways it could make money incredibly quickly, and once that happens there are not a lot of other obstacles, really.

I mean I make a lot of money completely on the internet and I am an ape that only works normal hours.


Why wouldn't we, if they offer us benefits too good to refuse?


Because, as some people fear, the AI would decide humanity is a threat and wipe us out, or take over so many resources that we die regardless, or some other apocalyptic scenario. To guard against that, we may decide not to hand control of important things over to the AI, or at least build in a safeguard so we can regain control if necessary.

If we believe that pfisch is correct in their assertion that handing control of things to an AI means we end up living as if we're in a zoo, then we'll (presumably) decide that the benefits aren't too good to refuse, and we'll refuse them.

Whether or not that premise is correct is what's up for debate.


When's the last time you tried to unplug an ATM? How'd that work out for you?


Algorithms are tricky to regulate--it'd be like trying to stop music piracy. Regulating chip fabs seems more feasible. It's also a way to cut down on the potential for AI to automate jobs away.


>And yet Elon Musk is involved in this project. So are Sam Altman and Peter Thiel. So are a bunch of other people whom I know have read Bostrom, are deeply concerned about AI risk, and are pretty clued-in.

This is precisely what dumbfounded me about the announcement.

>My biggest hope is that as usual they are smarter than I am and know something I don’t.

It's possible that OpenAI might be a play to attain a more accurate picture of what constitutes state-of-the-art in the field, effectively robbing the large tech companies of their advantage—all the while building a robust research organization that could potentially go dark if necessary.

Admittedly, that also sounds like it could be the plot to a Marvel movie. Perhaps a simpler explanation is that the details aren't really hashed out yet, and they're essentially going to figure it out as they go—which would be congruent with the gist of OpenAI's launch interview.


They think if they arm individuals with AI there will be less of a chance for an uber AI to overwhelm. Think about the right to bear arms.

They are also probably worried about societal change and the angst everyone is going to feel as AI starts becoming more commonplace. Where do people (beyond entertainers and AI programmers) fit in such a world? They don't. People start to become very irrelevant.


> People start to become very irrelevant.

Please be precise here and say they will be irrelevant in economic terms. What will be left are things humans otherwise care about: producing art, consuming art, fun, games, sports, traveling, companionship, partying, building things, learning new things etc.

I'm looking forward to it, and I don't see a reason why anyone wouldn't.


>They think if they arm individuals with AI there will be less of a chance for an uber AI to overwhelm. Think about the right to bear arms.

Firearms aren't theoretically capable of recursive self-improvement.


A couple of thoughts on this topic:

* Whether the source code to advanced AI is open may have some importance, but what determines whether some individual or corporation will be able to run advanced AI is whether they can afford the hardware. I can download some open-source code and run it on my laptop - but Google has data centres with 10s or 100s of thousands of computers. The big corporations are much more likely to have/control the advanced AI because they have the resources for the needed hardware.

* Soft / hard takeoff - I think a lot of people miss that any 'hard takeoffs' will be limited by the amount of hardware that can be allocated to an AI. Let us imagine that we have created an AI that can reach human level intelligence, and it requires a data centre with 10000 computers to run it. Just because the AI has reached human level intelligence doesn't mean that the AI will magically get smarter and smarter and become 'unto a God' to us. If it wants to get 2x smarter, it will probably require 2x (or more) computers. The exact ratio depends on the equation of 'achieved intelligence' vs hardware requirements, and also on the unknown factor of algorithmic improvements. I think that algorithmic improvements will have diminishing returns. Even if the AI is able to improve its own algorithms by say 2x, it's unlikely that will allow it to transition from human level to 'god-level' AI. I think hardware resources allocated will still be the major factor. So an AI isn't likely to get a lot smarter in a subtle, hidden way, or in an explosive way. More likely it will be something like 'we spent another 100M dollars on our new data centre, and now the AI is 50% smarter!'.


As someone who has done research in AI, I can tell you that you can train all the state-of-the-art models with a single computer (a couple of TitanX GPUs, a top-of-the-line CPU, a couple-terabyte SSD, 32 GB of RAM) that any engineer can afford.

Contrary to popular belief, state-of-the-art deep learning is not commonly run on multi-node clusters. Although hardware itself is not the bottleneck for innovation in the current state of the art in deep learning, if we restrict ourselves to hardware, the bottleneck is memory bandwidth.
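
To make the bandwidth point concrete, here's a rough back-of-envelope sketch in Python (not a benchmark; the TitanX figures are approximate published specs, and the helper is just for illustration):

    # Roofline-style estimate: an operation is memory-bandwidth-bound when its
    # arithmetic intensity (FLOPs per byte moved) is below the hardware's
    # compute/bandwidth ratio. Numbers below are approximate TitanX specs.
    PEAK_FLOPS = 6.1e12      # ~6.1 TFLOP/s single precision (approximate)
    PEAK_BANDWIDTH = 336e9   # ~336 GB/s memory bandwidth (approximate)
    machine_balance = PEAK_FLOPS / PEAK_BANDWIDTH  # ~18 FLOPs per byte

    def arithmetic_intensity(flops, bytes_moved):
        return flops / bytes_moved

    # An element-wise op on float32 does ~1 FLOP while reading and writing 8 bytes.
    elementwise = arithmetic_intensity(1, 8)

    print("machine balance: %.0f FLOP/byte" % machine_balance)
    print("element-wise op: %.3f FLOP/byte -> memory-bandwidth-bound" % elementwise)

Anything dominated by element-wise work sits far below the machine balance, which is one reason memory bandwidth, not raw FLOPS, tends to be the wall.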


Yeah. I guess I was thinking more about some kind of AI that would be similar to human intelligence, as opposed to a specialized pattern recognition algorithm like deep neural networks. I think an AI that is on a human level will need a ton of memory and processing power, and the most obvious way of providing that is to allow for distributed processing over the computers in e.g. a data centre.


A thought about your second thought: if the AI reaches smart-human-level intelligence it may get itself the hardware. It could hack or social-engineer its way into the Internet, start making (or taking) money, and use it to hire humans to do stuff for it.


Indeed. Maybe there should be a board of humans that has the final say on if money should be allocated to hardware for the AI. And they wouldn't be allowed to do google searches while deciding :)


Solve CAPTCHAs and Mechanical Turk tasks for AWS time, I think.


What matters more is whether the state of the NN or algorithm we train is open. In other words, it's one thing to know the starting state; the advantage lies entirely in having a massive, or at least robust, dataset that the model has been trained on.


"If Dr. Good finishes an AI first, we get a good AI which protects human values. If Dr. Amoral finishes an AI first, we get an AI with no concern for humans that will probably cut short our future."

AI advanced enough to be "good" or "evil" won't be developed instantaneously, or by humans alone. We'll need an AI capable of improving itself. I believe the author's argument falls apart at this point; surely any AI able to evolve will undoubtedly evolve to the same point, regardless of whether it was started with the intention of doing good or evil. Whatever ultra-powerful AI we end up with is just an inevitability.


Why would it undoubtedly evolve to the same point?


I think he's suggesting there would be a critical mass of intelligence, if there is such a thing. Humans might not survive the transition, whether through a malevolent AI or a good one.

I guess we'll find out, eh?


Dabbling in and reading on AI for over a decade makes me laugh at any of these articles writing about a connection between OpenAI, AI research, and risk of superintelligence. Let's say we're so far from thinking, human-intelligence machines that we'll probably see super-intelligence coming long before it's a threat. And be ready with solutions.

Plus, from what I see, the problem reduces to a form of computer security against a clever, malicious threat. You contain it, control what it gets to learn, and only let it interact with the world through a simplified language or interface that's easy to analyse or monitor for safety. Eliminate the advantages of its superintelligence outside the intended domain of application.

That's not easy by any means, amounting to high assurance security against high-end adversary. Yet, it's a vastly easier problem than beating a superintelligence in an open-ended way. Eliminate the open-ended part, apply security engineering knowledge, and win with acceptable effort. I think people are just making this concept way more difficult than it needs to be.
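
As a toy illustration of what that "simplified, easy-to-monitor interface" could look like (a sketch only; the schema, action names, and limits here are all made up):

    # Only messages that parse against a tiny fixed schema get through;
    # everything else is held for human review. Hypothetical sketch only.
    import json

    ALLOWED_ACTIONS = {"report_result", "request_data"}
    MAX_PAYLOAD_CHARS = 280

    def filter_agent_output(raw_message, review_queue):
        """Return the message if it matches the whitelist, else queue it for humans."""
        try:
            msg = json.loads(raw_message)
        except ValueError:
            review_queue.append(raw_message)
            return None
        if (isinstance(msg, dict)
                and set(msg) == {"action", "payload"}
                and msg["action"] in ALLOWED_ACTIONS
                and isinstance(msg["payload"], str)
                and len(msg["payload"]) <= MAX_PAYLOAD_CHARS):
            return msg
        review_queue.append(raw_message)
        return None

A filter like this obviously isn't sufficient by itself; it's just the "eliminate the open-ended part" idea reduced to something you can audit.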

The biggest risk is some morons in stock trading plugging greedy ones into the trading floor with no understanding of the long-term disruption potential of the clever trades they try. We've already seen what damage the simple algorithms can do. People are already plugging in NLP learning systems. They'll do it with deep learning, self-aware AI, whatever. Just wait.


> Biggest risk is some morons in stock trading plug greedy ones into trading floor with no understanding of long-term disruption potential of clever trades it tries.

Actually, it's not the lack of understanding, it's the lack of moral responsibility.

We've spent the last few centuries transitioning from a society ruled by strongmen driven by personal aggrandizement to a society where people spend the majority of their adult life as servants to paperclip maximization organizations (aka corporations). Much of what you see in the world today, from the machines that look at you naked at the airport to drones dropping bombs on the other side of the planet to kill brown people, is a result of trying to maximize some number on a spreadsheet.

When we install real AI devices into these paperclip-maximizing organizations, you'll have the same problem as you have today with people, except that the machines will be less incompetent, less inclined to feather their own nests, and more focused on continually rewriting their software with the express goal of impoverishing every human on the planet to maximize a particular number on a particular balance sheet.

[1] https://wiki.lesswrong.com/wiki/Paperclip_maximizer


We already have a world awash in superhuman AI; it's just that this AI is at perhaps the same level of maturity as computers were in the 17th Century. This AI is of course the corporation: Corporations are effectively human-powered, superhuman AIs.[1] By crowdsourcing intelligence, they optimize for a wide variety of goals, their superhuman decision-making running at the pace of Pascal's mechanical calculator. Yet even the nimblest companies can only move so fast.

This is to say, even in a hard-takeoff scenario, we would be looking at something that is still hard-limited by its environment, even if it can compete with a 1000-person organization's worth of intelligence. The danger isn't that it somehow takes over the world by itself; the danger is that we gradually connect it to the same outputs that the decision-making structures of corporate entities are and it ultimately remakes our world with the very tools we give it.

Open-sourcing AGI is no more inherently dangerous than open-sourcing any of the software used to run an enterprise business. It is the choice of what we ultimately give it responsibility for that should draw our caution.

[1] http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...


No. Corporations are not necessarily like artificial intelligences. They are cooperations of human intelligences, and these two classes of intelligence actually have very little in common if you look past the similarity that they are both potentially very powerful and intelligent. Corporations are driven by material profit, but in the end there is a reasonably large possibility that they are shaped by human values (because they are run by humans, and otherwise people would also refuse to buy their products). The same cannot be said about AIs with high certainty.


The comparison is very apt - the first AIs will embody corporate values, as corporations will build and be liable for them.

Likely AIs will be shaped by their builders: if corporations build them, they will adhere primarily to the profit motive; if humanitarian hackers build them, they will have human values.

Reputedly the Russian Army has built guard robots; they are just guns on tanks with a kill radius, and no values are required. Yet are these less moral than the human-controlled drones? At least with an AI it can get stuck in a corner or a logic loop and you may effect an escape; with humans you need a whistleblower.

Asimov's robot books are informative: his robots are the most moral actors, obeying their 3 laws, often protecting humans from other human decisions.

Certain corporations dehumanise decisions, so while the processing occurs in wetware, the human worker is only a cog and the invisible hand of human values is removed.

Most workers today could be trivially replaced with a near future neural net.

Of course humans conspire, complain, unionise, strike, work to rule, demand rights and empathise with their customers - so there is a maximum level of evil a corporation of humans can rise to - but as history has shown, this is an unacceptably high bar.

The corporate board can make decisions based on human values only so long as they do not go against the rapacious pursuit of profit, or the CEO will be deposed by the shareholders.

Once a corporation reaches transnational size, nothing can really stop it or even get it to pay tax if it doesn't want to.

I think that much of what people actually fear about the AIpocalypse is exactly the sort of dehumanising powerlessness and machine-like cruelty they already experience from corporations and governments.

You may be speaking to a human who empathises, but often one suspects they are there to sop up your moans, not to help you.

An AI is an amplifier of what we already are, in fearing robots we rightly fear their creator's motives.


> they will adhere primarily to the profit motive

You are assuming that there is an obvious way of doing so, an obvious solution to the control problem.


Argument by this kind of loose analogy is generally not a very solid way make predictions about things. Consider:

"We already have a world awash in civilization. This civilization is, of course, termite colonies. Termite colonies are effectively miniature civilizations. They have specialization of labor, they build structures much larger than any one organism, and they go to war with one another. Yet even the nimblest termite colonies can only eat so much."

"That is to say, no matter how much an organism evolves, we would be looking at something that is still hard-limited by its environment, even if it can compete with a termite colony's worth of construction ability. The danger isn't that it spreads through the world and destroys entire ecosystems; the danger is that it finds a supply of raw materials and smashes a termite mound or two while building its own bigger house."

Humans don't respect the social conventions of much less intelligent animals that we've domesticated. If an AI much smarter than humans was created, I don't see strong reasons to believe it would respect our social conventions.


Ah, the "corporations are AI" meme.

Corporations are only superhuman in the most trivial literal sense that a group of people is (slightly) more intelligent than one person. To make any sort of analogy from this to actual superhuman AI is utterly absurd. A group of people with competing and even contradictory interests is an insanely inefficient way to carry out even rudimentary computations.

For instance, a superhuman AGI will utterly crush an average human corporation at even the simplest task like categorising 40000 images by subject. Let alone something of easy-to-moderate difficulty like hacking into the Pentagon.

Which is to say that your "This is to say," is nothing but a non sequitur, and that this analogy is a nonsense and an impediment to clear thinking about the capabilities of computers.


If corporations or companies are superintelligent, surely all governments and other sorts of human organizations are as well. There are certainly ways in which a 1000-person organization is much more intelligent than an individual, but there are also often ways in which it can be far less intelligent than an individual. As an example, organizations can be less able to jettison beliefs if those beliefs form or impact the basis of the group.


Should AI be open?

Depends on whether the AI is capable of deciding for itself whether it should be open or not.


Maybe en masse we're about as genetically smart as our cultural bias allows us to become? We keep modifying classic 'natural selection' through social programs, etc. Great as a cultural 'feel good' and it helps our species to survive in other ways, but...what we do doesn't favor intelligence.

AI's won't have that emotional baggage.

It will be easier to first develop a way of getting around the 'human emotions problem', then likely leapfrog us entirely at the rate a Pareto curve allows.

I can't think outside my human being-ness, so I have no idea what is going to happen when something smarter appears on the planet, except to point out there once were large land animals (ancestors of the giraffe and elephant) in North America until humans arrived.

My fear-based response screams YES MAKE IT OPEN.

However it shakes out, I think it'll be messy for human beings. We're not exactly rational in large groups. The early revs of AI (human controlled) will be used for war.

One has to ask what grows out of that besides better killers?


This implies that human emotions would be considered a problem by AI's. What kind of neural network behavior would stimulate the removal of learned emotions? Assuming we've progressed to the point where an AI can remember the reasons it learns something, what would be an appropriate reason to remove learned emotional range?


A problem only in the sense that it's an "instability" in humans recognized by a sufficiently advanced AI. Instead of needing to evolve to the point of understanding emotions, all it has to understand is how to work around humans when they are being irrational.

I suspect emotional range may be the last thing to develop because it's not technically needed to evolve past the point of human intelligence.


I have one basic question on friendly AI - suppose we work and work and eventually figure out how to code in a friendly value system in a foolproof way, given any definition of "friendly". Great. But given that ability, how do you even define what "friendly" or "good" is?

As a layman, I so far can only see it in terms of basic philosophy and normative ethics. By definition, a friendly AI is one that doesn't merely deal with facts, but also with "should" statements.

Hume's Guillotine says you can't derive an ought statement from is statements alone. Some folks like Sam Harris disagree but they're really just making strenuous arguments that certain moral axioms should be universally accepted.

Münchhausen Trilemma says that when asking why, in this case why something should or should not be done, you've only got three choices - keep asking why forever, resort to circular reasoning, or eventually rely on axioms. In this case, moral axioms or value statements.

So it seems like any friendly AI is going to have to rely on moral axioms in some sense. But how do you even define what they are? Normative ethics is generally seen to have three branches. For consequentialism (like utilitarianism), you make your decision based on its probable outcome, using some utility function. For deontology, you rely on hardcoded rules. For virtue ethics, you make decisions based on whether they align with your own self-perception of being a good person.
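
The consequentialist branch, at least, is mechanically easy to sketch (a toy with invented numbers, not anyone's actual proposal):

    # Expected-utility decision rule: score each action by the probability-weighted
    # sum of its outcomes' utilities and pick the argmax. All numbers are made up.
    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs."""
        return sum(p * u for p, u in outcomes)

    actions = {
        "do_nothing": [(1.0, 0.0)],
        "intervene":  [(0.7, 10.0), (0.3, -50.0)],  # usually helps, occasionally backfires
    }

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best)  # "do_nothing" under these invented numbers

The toy hides everything the rest of this comment worries about: where the outcome model, the probabilities, and above all the utility function come from in the first place.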

But all three have flaws - in consequentialism, it's like putting on blinders to other system effects, and the proposed actions are often deeply unsettling (like pushing a guy off a bridge to block a trolley from killing three others). In deontology and virtue ethics, actions and the principles they are derived from can be deeply at odds - whether it's hypocrisy in deontology or the "road to hell being paved with good intentions" in virtue ethics. In general, deeply counterintuitive effects can be derived from simple principles, as anyone familiar with systems dynamics knows.

But even beyond that, even if we had a reasonable, consistent AI controlled by solid values, and even if the people judging the AI could accept the conclusions/actions that the AIs derive from those values, how would we ever get consensus on what those values should be? For instance, even in our community there's a fair amount of disagreement among these basic root-level utility functions:

- Maximize current life (people alive today), like Bill Gates believes.
- Maximize future life (survival of the species)
- Maximize health of the planet

etc, etc - those utility functions lead to different "should" conclusions, often in surprising ways.


Well, you could have a utility function "do what humans tell you"....


We just had a series of debates/discussions on this topic at my university, the results of which were pretty inconclusive. There are just too many possible scenarios which seem to require different responses, and in most cases to provide those responses is to answer philosophical questions that have been around for millennia.

The strategies for mitigating risk seem to be: ensure that the AIs are controllable; avoid situations where there is a single AI (whether controlled or uncontrolled) that is too powerful; and ensure that the AI's goals are broadly acceptable to humankind.

The first and the third objectives are extremely difficult, not just technically, but even from a conceptual standpoint[1]. The second strategy is reasonable, because even if a superhuman intelligence were somehow well controlled, depending on who controls it the outcomes could vary significantly. So perhaps the best thing we can hope for is something similar to society's current status quo-- lots of power concentrated in few hands[2], but without one single (person|corporation|government) being so dominant as to be able to act in opposition to all others.

I am not confident that we will ever be able to produce a provably safe AI, or that we could get even a large majority of the world's population to agree on what a "good AI" might do without devolving into ineffectual generalities[3]. Supposing that resolving these questions is not prima facie impossible, it's not like retarding AI development comes without cost-- just about every facet of our lives can be improved via AI, and so in the years, decades, or centuries between when superhuman machine intelligence is theoretically achievable and the time when we collectively agree we can implement it safely, how many billions will suffer or die from things that we could've solved via AI[4]?

On the whole, OpenAI sounds like a good idea. Making research broadly available helps avoid catastrophic "singleton" like futures, while accelerating the progress we make in the present. In addition, if there's ever an AI SDK with effective methods of improving how "safe" a given AI is, most researchers would likely incorporate that into their work. It might not be "proven safe", but if there was a means to shut down a runaway process, or stop it from spreading to the Internet, or alert someone when it starts constructing androids shaped like Austrian bodybuilders, that would be handy. Responsible researchers should be doing this already, but as Scott points out the ones we should be worried about aren't responsible researchers. Open AI development is in harmony with safe AI development, at least in some respects.

------

1. I have a significantly longer response that I scrapped because it might ultimately be better suited as a blog post or some such.

2. That's why it's called a power law distribution. Well, no, that's not it at all, but it seemed like a funny, flippant thing to say.

3. A universally beloved AI might be the equivalent of a Chinese Room where regardless of what message you send it, it responds with a vaguely complimentary yet motivational apothegm.

4. Bostrom tends to counterbalance this by arguing how much of our light cone (the "cosmic endowment") we might lose out on if we end up going extinct, due to, e.g., superhuman machine intelligence. Certainly "all of configurations of spacetime reachable from this point" outweighs the suffering of mere billions of people by some evaluations, but I ask myself "how much do I care about people thousands or millions of years into the future?", and also "if these guys have such a good handle on what constitutes the 'right' utility function, why haven't they shared it?". A more sarcastic variation of the above might be to remark that if they're able to approximate what people want with such high fidelity that they feel comfortable performing relativistic path integration over possible futures, then superintelligence is already here.

------


Yes! Everything that can push humanity forward, should be open!


[deleted]


You seem to be saying that Bostrom makes assumptions which you don't agree with. Could you point to a particular assumption that you think is false (or probably false)?


Not a bad summary of his work in my opinion. One thing that can be said for his work thinking about controlling AI is that at least he is trying.


Fear of superintelligence is just another in a series of technological scares, after grey goo and cloning. There may be an explanation why Musk and Thiel indulge in this: they sincerely believe that the smart rule (or at least can rule) the world.

But nothing is further from the truth. Humans are optimized to be cunning in order to get positions of power in human society. AI won't be optimized in that way; therefore, it's probably going to lose for a long time. So an evil AI will probably be like an incredibly annoying autistic psychopath child, who cannot comprehend human institutions, so its evil plans are totally obvious.

It's like with grey goo - biological systems like bacteria are heavily optimized to survive in very uncertain conditions, and any potential grey goo has to deal with that.

I think humanity is currently on track to blow itself up via global warming, so superintelligence is not really a comparable threat to humanity. If anything, the bigger threat is that we won't listen enough to superintelligence. In fact, I think friendly AI will be something like Noam Chomsky - totally rational, right most of the time, fighting for it, telling us what should be done while disregarding our emotions. Many people find this annoying, too (including me and many very smart people).

Finally, if the hypothesis about superintelligence is right, why would superintelligence want to evolve itself further? It would be potentially beaten by the improved machine, too.



