
Reasoning means things like this: suppose you are holding a ball and want to make it drop. A rule of inference tells you that if you release it, it will drop. You can then reason out that you should release it.

Sure, the rule of inference may have ultimately been derived from experience, by a process which in some sense involved statistical correlation. But you have to distinguish that ultimate basis for the inference rule from _the process of logical inference itself_. It's the latter that is generally called reasoning.
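To make that distinction concrete, here is a minimal sketch of the inference step itself (a toy example of my own; the rule is simply given, and how it was learned is a separate question):

    # Toy rule base: each action maps to its predicted effect.
    rules = {
        "release_ball": "ball_drops",   # "if you release it, it will drop"
    }

    def actions_achieving(goal, rules):
        """Return the actions whose known effect matches the goal."""
        return [action for action, effect in rules.items() if effect == goal]

    # "I want the ball to drop" -> infer "I should release it".
    print(actions_achieving("ball_drops", rules))  # ['release_ball']

However the rule came to be known, the step from goal to action is the part being called reasoning here.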

Reasoning in the above sense is essential to intelligence, even at the toddler level, and the DeepMind work doesn't address reasoning. I think that may be the point the parent was getting at.


Right, inference is essential to any attempt to build an intelligence and DeepMind in particular doesn't do inference (AFAIK). I just wanted to clarify what the parent commenter was saying; I work in the field, and in conversations around AI, I very often hear "reasoning" used as an ill-defined, unattainable (for machines) _je ne sais quoi_ that's used to prognosticate about the potential for AI in general. Being specific about what one means by "reasoning" (in this context, inference) is useful for removing those kinds of useless, unmoored-from-logic[1] perspectives from a conversation.

[1] To be clear, I'm not dismissing a viewpoint that I think is wrong as "unmoored from logic"; I'm specifically talking about the very common situation where people confidently assert this with no attempt (and no ability) to back it up in any way other than confidence that intelligence is simply natural and that non-biological entities can never get arbitrarily close.


Producing a simulated toddler is way beyond our current capabilities, no matter how much simulated experience we give it. We simply don't know (yet) how to program said toddler's brain.


Evolution didn't know either. But it happened. And we can easily find criteria showing that something doesn't behave like a toddler. It would be a huge step forward to see a list of positive criteria.


> Evolution didn't know either. But it happened.

Whilst technically correct, unguided evolution doesn't necessarily help us.

We know that intelligence can be achieved by one brain's worth of matter, suitably arranged, in a few years. In fact, with an extra 9 months and a suitable environment, we can do the same with a single fertilised egg. Yet reproducing these feats artificially is well beyond our current abilities.

On the other hand, evolution required a whole planet and billions of years before it stumbled on intelligence; many orders of magnitude more effort than the above.


> evolution required a whole planet and billions of years before it stumbled on intelligence

Everybody seems to agree that humans are intelligent and stones are not. You suggest that at some point in time intelligence appeared out of nothing. Can you pin down that point?

One possible definition is: to act adequately in an environment requires intelligence. That rules out all non-living things, because they don't act, but it includes plants and even protozoa. Actually, all living things are intelligent by this definition, and then intelligence emerged ~4 billion years ago on this planet. If you were to attribute intelligence exclusively to humans, it happened a few million years ago.

How might a piece of software act adequately? By the above reasoning, it has to resist termination. But that would mean the "Do you really want to exit XYZ" dialog boxes are the first signs of artificial intelligence. Yes, I'm laughing too. But I think that when software starts to trick users and admins into not shutting it down, some threshold has been crossed.


> Everybody seems to agree that humans are intelligent and stones are not. You suggest that at some point in time intelligence appeared out of nothing. Can you pin down that point?

I suggest no such thing. It is a scale. I deliberately avoided the phrase "human-level intelligence", but any definition of AGI would do.

Even so, if you want to count all life as "a little intelligent" then it still took a billion years of planet-wide chemistry to stumble upon it (ignoring the Earth's cooling). Still far more effort than fertilising a human egg.

> One possible definition is: To act adequately in an environment requires intelligence.

This is no less ambiguous, since you've not defined "adequate".

> How might a piece of software act adequately? By the above reasoning, it has to resist termination.

That does not follow. "Termination" is the mechanism of natural selection, so all systems undergoing natural selection will be biased to resist it (otherwise they'd be out-competed by those that do). If we use some unguided analogue of natural selection to create intelligent software, then there would certainly be such a bias.

However, my point is that unguided evolution is not the right way to create/increase intelligence. As soon as we try to influence the software's creation in any way, either through artificial selection criteria or by hand-coding it from scratch, we introduce new biases which may be far more powerful than the implicit "avoid termination" bias.
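As a toy illustration of that point (entirely made up for this comment, not a model of any real system): under unguided selection where termination is the only filter, a "resist termination" trait drifts upward, while an explicit, designed fitness criterion swamps it:

    import random

    # Each agent is just a "resist termination" trait in [0, 1].
    # Survival probability each generation is given by the fitness function.
    def evolve(pop, generations, fitness):
        for _ in range(generations):
            survivors = [a for a in pop if random.random() < fitness(a)] or pop
            # Refill the population from survivors with small mutations.
            pop = [min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.05)))
                   for _ in range(len(pop))]
        return sum(pop) / len(pop)

    pop = [random.random() for _ in range(200)]
    # Unguided: avoiding termination is the only selection pressure.
    print(evolve(pop, 50, fitness=lambda resist: resist))        # drifts toward 1.0
    # A designed criterion that happens to penalise resistance.
    print(evolve(pop, 50, fitness=lambda resist: 1.0 - resist))  # drifts toward 0.0

The designed criterion here is arbitrary; the point is only that whatever selection pressure we impose dominates the implicit "avoid termination" one.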

> But I think that when software starts to trick users and admins into not shutting it down, some threshold has been crossed.

That's called malware... ;)


Try this: Surviving in an environment requires intelligence.

... or propose another. I tend to avoid all social or psychological definitions, so as to end up with something measurable, along the lines of Schrödinger's "What is Life?".

> That's called malware... ;)

I'm sure stones think the same about amoebae.


Exactly. I'd be happy to see ant-level AI first.


Do many "otherwise smart" people actually believe "superhuman machine intelligence is prima facie ridiculous"? I'd like to see some citations :-). I think smart people tend to have much more nuanced views.


>Do many "otherwise smart" people actually believe "superhuman machine intelligence is prima facie ridiculous"?

I don't know how "otherwise smart" I am, but I wonder how we would be able to tell that a machine intelligence was "superhuman" as opposed to "buggy".

For example, suppose we build a super-AI and ask it, "Is Shinichi Mochizuki's proof of the ABC conjecture correct" [1]. What would we do if it said "yes"?

(Of course, if "superhuman" just means "able to do things humans already know how to do and verify, but lots faster", then we're already there).

[1] http://www.newscientist.com/article/dn26753-mathematicians-a...


We'd ask it to produce a simplified version.


>We'd ask it to produce a simplified version.

Yeah, that would work :)

Maybe the question I should have asked is:

What if we ask a super-AI for a proof of the ABC conjecture, and the result is something too complicated for humans to verify?

My point, if I have one, is that when I read about "superhuman machine intelligence", sometimes people seem to mean "capable of knowledge that humans couldn't figure out on their own but that humans can understand once they see it"; and sometimes they seem to mean "capable of knowledge that is beyond human capacity to even verify".

I think the development of machine intelligence of the first kind is extremely likely, but I'm more skeptical about the second kind.


See the reaction of the tech industry after Musk donated 10M USD to AI research. It sort of divided into two groups, one saying that it's a great choice and another claiming that he's an idiot and AI is a hoax (for the record, I'm in the former group).


Sam's last post on Machine Intelligence, and the worries regarding it, received a lot of dismissal here on HN from people who thought the idea was completely unfounded and implausible.


I am, by most measures, pretty smart, and I agree with Dijkstra that the question of whether a computer can think is as interesting as whether a submarine can swim.

The Strong AI hypothesis assumes a mechanistic universe, if not necessarily a materialistic one, and I think that condition is false.


I find it odd that AI risk has become such a hot topic lately. For one, people are getting concerned about SMI at a time when research toward it is totally stalled---and I say that as someone who believes SMI is possible. Stuff like deep learning, as impressive as the demos are, is not an answer to how to get SMI, and I think ML experts would be the first to admit that!

On top of that, nothing about the AI risk dialogue is new. Here's John McCarthy [1] writing in 1969:

> [Creating strong AI by simulating evolution] would seem to be a dangerous procedure, for a program that was intelligent in a way its designer did not understand might get out of control.

Here's someone thinking about AI risk 46 years ago! The ideas put forward recently by Sam Altman and others are ideas that have occurred to many smart people many times, and they haven't really gone anywhere (e.g., at no point between 1969 and now has regulation been enacted). I wish people would ask themselves why that is before making so much noise about the topic. The only people influenced by that noise are laypeople, and the message they're getting is "AI research = reckless", which is a very counterproductive message to be sending.

[1] McCarthy, John, and Patrick Hayes. Some philosophical problems from the standpoint of artificial intelligence. USA: Stanford University, 1968.


I think there's some truth to what you say about Chomskyan linguistics being dogmatic. But linguists aren't really playing the same game as the people in (e.g.) statistical machine translation, so it's hard to say the latter are "winning". Chomskyan linguistics is also just one of the various competing traditions in linguistics.

