
The premise is wrong.

"We’ve been told the artificial intelligence (AI) revolution is right around the corner. But what if it isn’t?"

Google Photos already recognizes objects in photos to make them searchable by keyword, and removes obstructions (e.g. fences) from photos. Tesla cars already follow the speed limit automatically, brake automatically, automate lane changes, etc.

A few years ago this article might've had a valid point, but not anymore.



And yet we don't have a single AI that can both tell us whether a photo is a bird and win at checkers (or tic-tac-toe). Our current AI is stuck in widget-land: it may solve some small, interesting tasks, but we don't know how to even approach the harder stuff.

It reminds me of the old observation that any time we achieve something new in AI, it is quickly redefined as not AI. I think that is entirely correct, because what we're doing isn't actual AI. We're going to enter another AI winter once laypeople begin to realize the limitations of the current state of the art. My prediction is that this will happen once progress on the self-driving front stalls.


There is a limit to the current method. The generalised AI you described is likely quite a while away. It's also economically questionable, now that many specialised AIs can be built very affordably.

For example, if you want to teach a robot to flip burgers, you could invest a few trillion dollars into generalised AI and wait many years for an uncertain outcome, or you could create training sets for a neural network and be done in a few months for a few million dollars. The reason we know we have made advances is that until a few years ago, the problem of training that robot did not seem to be within reach. Today it mightn't be completely easy, but most people would agree that it's quite achievable.
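
To make the specialised route concrete: it boils down to ordinary supervised learning, i.e. collect labelled examples and fit a network. A minimal sketch (the sensor dimensions, action labels, and data below are all invented purely for illustration; a real robotics pipeline would be far more involved):

    # Sketch of the "specialised AI" route: train a small network on
    # labelled examples of (sensor reading -> action). All data is fake.
    import torch
    import torch.nn as nn

    X = torch.randn(1000, 16)          # pretend 16-dim sensor readings
    y = torch.randint(0, 4, (1000,))   # pretend actions: wait/flip/press/remove

    model = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(),
        nn.Linear(64, 4),              # one logit per action
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(20):
        opt.zero_grad()
        loss = loss_fn(model(X), y)    # how well do predictions match labels?
        loss.backward()                # backpropagate
        opt.step()                     # update weights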


> And yet we don't have a single AI that can both tell us whether a photo is a bird and win at checkers (or tic-tac-toe).

Google can. I did a search by image and it identified my picture as a dog. And if you search for "tic-tac-toe" it has a built-in game with difficulty settings up to Impossible (which presumably plays perfectly).
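
Perfect play at tic-tac-toe, at least, is genuinely cheap to build: plain minimax can search the whole game tree. A minimal sketch in Python (a standard textbook technique, not Google's actual implementation):

    # Exhaustive minimax: the game tree is tiny, so the machine can
    # play perfectly. 'X' maximises the score, 'O' minimises it.
    def winner(b):
        lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),
                 (1,4,7),(2,5,8),(0,4,8),(2,4,6)]
        for i, j, k in lines:
            if b[i] and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def minimax(b, player):
        w = winner(b)
        if w:
            return (1 if w == 'X' else -1), None   # someone already won
        if all(b):
            return 0, None                         # board full: draw
        best = max if player == 'X' else min
        scored = []
        for m in (i for i in range(9) if not b[i]):
            b[m] = player                          # try the move...
            score, _ = minimax(b, 'O' if player == 'X' else 'X')
            b[m] = None                            # ...then undo it
            scored.append((score, m))
        return best(scored, key=lambda s: s[0])

    score, move = minimax([None] * 9, 'X')
    print(move, score)   # score 0: with perfect play the game is a draw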


Presumably those are separate systems. I'm talking about multi-task learning.
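
For what it's worth, the multi-task setup I mean would look roughly like the sketch below: one shared representation feeding a separate head per task, trained jointly, rather than two unrelated single-task systems. The input sizes, tasks, and data are invented purely for illustration:

    # Toy multi-task learning: a shared encoder with one head per task,
    # optimised jointly on both losses. All data here is fake.
    import torch
    import torch.nn as nn

    shared = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # shared features
    head_bird = nn.Linear(64, 2)   # task 1: bird / not-bird
    head_ttt = nn.Linear(64, 9)    # task 2: score each tic-tac-toe square

    params = (list(shared.parameters()) + list(head_bird.parameters())
              + list(head_ttt.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(100):
        x1, y1 = torch.randn(64, 32), torch.randint(0, 2, (64,))  # task-1 batch
        x2, y2 = torch.randn(64, 32), torch.randint(0, 9, (64,))  # task-2 batch
        loss = (loss_fn(head_bird(shared(x1)), y1)
                + loss_fn(head_ttt(shared(x2)), y2))
        opt.zero_grad()
        loss.backward()
        opt.step()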


The point is that we didn't have this in products 5-10 years ago. Advances are being made. They might hit a limit, but no one is claiming we have 100% solved AI; just that advances have been made and that products are not yet taking advantage of all the new possibilities.


Once people come up with a way to have a computer think abstractly, it's just a matter of linking together a bunch of different subsystems. Your brain works the same way (try getting your visual cortex to come up with your next tic-tac-toe move).


These things are not "intelligence", though. They are gimmicks that can only be applied to a very limited range of problems.

These advances are real advances, but they do not prove that general AI is possible.


Maybe this "not intelligence" stuff is far more useful than strong AI.

And sure, each solution tackles a limited range of problems, but a lot of problems can be solved or made easier by this approach. The results are already above and beyond anything ever achieved by strong AI research.

Edit: Sorry, I only now realize the point of your comment. Yes, there is no indication (in my opinion) that current achievements will lead us to strong AI.


> they do not prove that general AI is possible

Is there anyone who seriously believes that AGI isn't possible? I thought the only argument was about the timeline. To me it's absolutely inevitable, even if it may not be in my lifetime. I mean if we really can't do it with computers, then we'll just genetically engineer a giant brain in a vat or something. Still artificial! But unless you have some convincing evidence that human intellect is at the very limit of some speed-of-light-like universal constant, you bet we'll build something better. Eventually.

All of these articles basically make me think of a newspaper in 1915: "What if human flight is a failed dream?" Just wait, buddy. Rome wasn't built in a day, or even a decade.


Doubter here.

a) For AGI: I personally think humans have an intellectual limitation that keeps us from reaching this point. Case in point: dogs can't talk. Arguing the 'inevitability' is to some extent arguing that one day dogs will be able to talk, because why shouldn't they? Will we make 'general AI' close enough to fool many people many times, essentially composed of a bazillion cogs wired together and rigged to appear general? Probably. But to the level that it meta-tunes itself? No.

b) For your 'engineered brain': if you are mimicking natural intelligence with chemistry, biology, etc., you are still dependent on understanding natural processes you didn't create, so it is only a clone and not at all 'artificial'.

If you'll notice, the philosophical limitations of 'b' are much the same as those of 'a': manually copying processes that already exist is not the same as creating them from scratch.


> arguing that one day dogs will be able to talk, because why shouldn't they?

You misunderstand how evolution works. Humans are the product of over 10 million years of brutal selection pressure in favour of intellect. Before that, we indeed had about the same vocabulary as dogs (you should really say wolves, by the way - dogs are our invention).

If wolves/dogs were subject to the same pressure, then there's no reason why they would not eventually adapt in the same way humans did. However, humans got there first, and have so thoroughly colonised the earth that there is no chance of this happening.

> But to the level that it meta-tunes itself?

Well, you're extending AGI here into some kind of singularity runaway intelligence explosion. That's not within the scope of my argument. I have no opinion on that.

> it is only a clone and not at all 'artificial'

In my view, anything that is not naturally occurring is artificial.


There is good reason to think that ML style techniques won't work.

The problem is that computers have long since surpassed human computational capabilities by orders of magnitude, but the increased computational ability hasn't made them much more intelligent.

For a computer to identify objects in a picture, even with sub-human accuracy, it must be trained on a dataset of billions of photos and get massive amounts of human feedback to tune its algorithms.

The human mind can perform the same task with a sample data set many orders of magnitude smaller.

This would tend to indicate that the mechanism used by ML is not the same mechanism the brain uses, or is vastly inferior by many orders of magnitude.

The further computational power increases without producing intelligence, the less likely it is that raw computation can produce intelligence.


It can be done in principle (proof: our existence). But there is no proof yet that we ("human intellect") are capable (smart enough) of getting it done. There might be a hard barrier to our intellectual capabilities that we can't see or haven't hit yet.


> There might be a hard barrier to our intellectual capabilities that we can't see

I guess? But there's no evidence for that, and you're not even presenting a theory.


They remind me more of cold fusion.


Then you're pretty damn confused. The two are not even remotely comparable.


I think they're much more similar than AI and flight are. We've been promised AI for decades and it's always just around the corner. Yet even today we don't have even simple worm-level AGI. We have game-players and cool-picture-makers and maybe some categorizers and function-approximators that are useful for business tasks, but in general it's a bunch of playthings and fluff.


I really think you should examine your way of thinking. I'm sure charlatans have been making and breaking promises about pretty much everything you can think of since the beginning of time. It's meaningless and is no basis for estimation.

I am talking about raw scientific possibility, and a good rule of thumb is: if you see it in nature, then it is assuredly possible, and humans will eventually do it a thousand times better.

Cold fusion: Does not exist in nature. Only exists in (controversial) theory. I'll believe it when I see it.

Hot fusion: Exists in nature so it's possible. After developing the technology, humans will be able to do it better. Will take a while.

Flight: Exists in nature, so it's possible. After developing the technology, humans can do it better. Took a while.

Intelligence: Exists in nature (i.e. you and me). After developing the technology, humans will be able to do it better. Will take a while.

See the pattern?


> humans will eventually do it a thousand times better.

From an energy conservation perspective nature is perfect. It is impossible for humans to do better than nature when it comes to conserving energy, which is a pretty key problem, so it seems a little optimistic to think that humans can do 1000X better than nature at very many things (or any?).


> From an energy conservation perspective nature is perfect

I don't understand. Energy is always conserved whether it's nature or humans at work - literally the first law of thermodynamics. Nature, humans, aliens, anything else are all equally "perfect", so I don't understand your point or its relevance.


My point is that in this key area it is impossible to do better than nature since nature is already perfect.

I am suggesting that the assumption that humans are generally able to do 1000X better than nature may be flawed if nature can already do the most important things so well.


I don't mean to be rude but that's not a coherent argument. I don't understand what you're trying to say. Nature is certainly not "perfect" in any interpretation.

I don't doubt that you have well-intentioned beliefs, but before stating them again you need to withdraw and figure out how you can state them in an understandable way. When you've done that, come back and we can talk. I am saying this with the best of intentions btw.


I might accept that comparison with the understanding that if AI is analogous to flight, we're currently at 6th-century Chinese kites and not 10 years from Kitty Hawk (which seems to be the perception when the flight analogy is made).


Then we are in complete agreement. I am only commenting on the physical possibility. It could very well take a thousand years!


This argument has been made at every step in the evolution of AI.

For a long time, AI would "arrive" when it could beat a human at chess. Then it did, and suddenly that became a gimmick, a trick of computation.

20 or 30 years ago, picture (if you can) how we would have viewed this technology: someone verbally telling a pocket-sized device to make an appointment and order groceries, asking it for facts or directions, etc. It would have seemed more like magic than feasible AI.

Today it's a "gimmick". The bar for AI always rises to beyond whatever we're currently comfortable with, and the bar for "strong AI" doubly so.


It's probably because at every step, we always think that only a strong AI could solve the puzzle. So, when someone manages to make a not-strong-AI solve it, we say they worked around the implicit rules and created a 'hack'.


Paradoxically this argument is made every time someone refutes the idea that AI is making progress.


> Google Photos already recognizes objects in photos to make them searchable by keyword, and removes obstructions (e.g. fences) from photos. Tesla cars already follow the speed limit automatically, brake automatically, automate lane changes, etc.

> A few years ago this article might've had a valid point, but not anymore.

Not that I entirely disagree, but we already had the stuff above a few years back, and arguably several years back.



