I can't be the only one who considers all these AI articles just smoke-and-mirrors puff pieces to prop up a company's value by capitalizing on the hype (hysteria?), can I? I think the first red flag is that the journalists don't seem to really understand the technical capabilities or limitations of current ML/AI applications. They accept grandiose claims at face value because there is no way to measure the real potential of AI, whose promise seems limitless, so anything appears plausible, especially coming from a big company like GOOG.
I think there are a couple of really overhyped areas right now: AR/AI/ML and IoT/IoE. While I don't mind the attention and money being thrown at tech, I can't help but feel we're borrowing more and more against promises, hopes, and dreams while simultaneously under-delivering, and I think that's going to hurt tech's image and erode investor confidence sooner rather than later.
A lot of things that were impractical 5-10 years ago are now moving out of the domain of the biggest companies to the smaller ones.
Applications involving computer vision and speech recognition are now buildable by small companies, which will hopefully yield a proliferation of really interesting novel applications.
Sure, we don't have Terminator-style AI, but honestly people using the term AI need to shut up, a lot. There is no AI these days, just massively creaky giant ML systems with a host of PhDs being thrown onto the fire to keep them running. But the ML applications are super cool.
The Chinese curse might now be updated to "may you wind up dependent on 'really interesting novel applications'..."
Machine learning-derived applications are impressive and give a good show until one winds up in a situation where they are expected to work reliably. Sure, it's nice that the insurance company's phone-based, voice-recognition-driven, registration/etc system can understand 99% of the choices people give them - except total failure in that 1% is actually going to leave a large population unserved and angry. Of course the company has keypad backup - except they don't 'cause that would cost the money they claimed voice recognition could save, etc.
Machine learning apps are great for situations where
1) You don't expect 100% reliability, and the degree of unreliability doesn't even have to be quantified.
2) Either you accept that they'll degrade over time and keep an army of tuners and massive data collection going to stop that from happening, OR you are dealing with an environment you completely control.
These are more or less the conditions for regular automation - except even more so.
We lost the word "AI" literally decades ago. Everything from search algorithms to machine learning to genetic algorithms to video game bots are called AI. The cool kids use the word "AGI" now.
There is a common expression that "AI is whatever computers can't do yet." At one time computers couldn't play board games or do vision, so those things were called AI.
It seems like your first and second paragraphs express opposite opinions. In the former you seem dismayed by the over-application/overhyping of the term "AI"; in the latter you seem frustrated by the high and ever-increasing bar for categorizing systems as "AI". Am I misinterpreting you?
My comment can be read two ways, and neither way is wrong. I wasn't really expressing an opinion as much as bringing up relevant facts. People label things that we don't know how to do yet "AI". And then when these hard problems are solved, they seem like they are not really "intelligent".
This leads not only to overuse and trivialization of the word, but also to moving goalposts for the field. And actual progress isn't taken seriously, because nothing feels like intelligence once you understand it.
Yeah, if anything, the "AI" part of the search has been part of the decline. Google aggressively gives me what it thinks I want rather than what I ask for. It seems like it's very clever in giving me something like what an entirely average person would likely want if they mistakenly typed the text that I knowingly and intentionally typed ("Kolmogorov? do you mean Kardashian?" etc).
The search does seem able to understand simple sentences, but there's much less utility in that than one might imagine. Just consider that even an intelligent human who somehow had all of the web in their brain couldn't unambiguously answer X simple sentence from Y person whose location, background and desires were unknown to them. Serious search, like computer programming, actually benefits from a tool that does what you say, not what it thinks you mean. Which altogether means they're a bit behind what AltaVista could give in the Nineties, but easier to use, maybe.
Part of the situation is the web itself has become more spam and SEO ridden and Google needs their AI just to keep up with the arms race here. So "Two cheers" or something, for AI.
They seem a bit better at dealing with search spam than they used to be. I was searching for something in Ho Chi Minh City the other day and got a spam site saying "where to get <thing> in ho chi minh", where they'd automatically produced similar pages for every city in the world, all pointing to their stupid online site, and I kind of got nostalgic - I hadn't seen that kind of spam for a year or two, and it used to be constant.
Sometimes I get the vibe that it's a weird self-fulfilling prophecy of terrible SERPs. You reward freshness too much, so then people play that game. But you also punish duplicate content at the same time, so the best content, if it already exists but is not fresh, naturally has to fall off.
That is pure nostalgia talking. I strongly suspect that if you actually went back to it, you would be appalled and want to switch back after only a few searches. Search in 2016 is leaps and bounds beyond what it was in 2006.
If the internet was 1 page 10 years ago, today it is 10 million pages. So today Google search is a lot better. It's not that Google search has deteriorated; it's that the web has too much useless data.
I'm still able to find what I'm looking for in the first 5 results for most things. 10 years ago I had to go to the 2nd or 3rd page for some obscure queries. Now it's a lot better at answering direct questions as well, instead of just giving results.
I would argue that you need to evolve with the software. Google search isn't the same anymore. Intellectual Darwinism: evolve or become irrelevant. You can't search the way you used to or you'll get crappy results. If you don't want to be limited by your Google search bubble, then search in anon mode.
> I think there are a couple of really overhyped areas right now: AR/AI/ML and IoT/IoE. While I don't mind the attention and money being thrown at tech, I can't help but feel we're borrowing more and more against promises, hopes, and dreams while simultaneously under-delivering, and I think that's going to hurt tech's image and erode investor confidence sooner rather than later.
I became interested in natural language processing in the early 2000s, more as a hobby and as part of my personal projects, but even so, I remember that back then most of the AI-related discussions on forums and mailing lists mentioned the AI winter as the big bad wolf that had killed an entire industry. It also killed LISP, they were saying.
Interesting to see that the memory seems to have faded into the distant past.
I thought the same thing when I saw the title. And then I saw the byline: Cade Metz. And I read the story. Metz understands tech. Metz got this story right.
So while you may be right that many AI articles are smoke-and-mirrors from journalists who don't get the tech, I think you picked the wrong article to make that point about.
We've had artificial intelligence since you could play chess against a computer, but the expectation has always been a mechanized Arnold Schwarzenegger or HAL 9000.
The difference today is the scale at which it operates in our daily lives, and the accelerated rate at which it is growing.
You can add to this the widespread misuse of the term "AI" when journalists really mean "robotics". I don't mind clever shortcuts when you need to explain something abstract or invisible to non-technical people, but at some point somebody will have to tell them they're two completely different fields.
>> I can't be the only one who considers all these AI articles just smoke-and-mirrors puff pieces to prop up a company's value by capitalizing on the hype (hysteria?)
You're not.
So far I have not seen/heard anything remotely resembling AI. Neural nets are just weighted graphs.
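And to make "just weighted graphs" concrete, here's a minimal sketch in NumPy of what a forward pass reduces to; the layer sizes and random weights are made up, purely for illustration:

```python
# A "neural net" reduced to its parts: weighted edges plus a nonlinearity.
# Layer sizes and weights are arbitrary, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))  # edge weights: 3 input nodes -> 4 hidden nodes
W2 = rng.normal(size=(4, 2))  # edge weights: 4 hidden nodes -> 2 output nodes

def forward(x):
    hidden = np.maximum(0, x @ W1)  # weighted sums along edges, then ReLU
    return hidden @ W2              # more weighted sums; that's the whole trick

print(forward(np.array([1.0, 0.5, -0.2])))
```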
Is anyone arguing that machine learning is not AI? The gist of the article is that Google is leaning toward ML/DL and away from the rules engines/Knowledge Graph. The headline is a shorthand, which, although it could be more precise, is not inaccurate.
Interesting that you feel that. The article mentions nothing about Google's Knowledge Graph. I don't have any privileged insight into Google, just the same surface data as the rest of you all - but I would say that, if anything, Google's Knowledge Graph can fit with _both_ a rules engine strategy and a machine learning one.
How is Google going to "organise the world's information" unless it has a model of how all the facts in the world line up? That model is the Knowledge Graph. How does Google intend to map queries that it has never seen before to pages in its vast index? With the help of the Knowledge Graph and natural language processing and machine learning.
I'm going to try to articulate something here that I've not fully worked out but that I'm sort of intuiting so cut me some slack for the next paragraph :)
People keep banging on about machine learning and the impact that it is having. This is undeniable. But we can see even from AlphaGo that a hybrid approach that combines artificial neural nets with some sort of symbolic system outperforms neural nets on their own. For AlphaGo that symbolic system is tied to the mechanics of the game of Go. For internet search that symbolic system is a generalised knowledge graph.
Do you get what I mean? I'd love to hear what others think …
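For what it's worth, a toy sketch of that hybrid shape might look like the following: a symbolic tree search whose move ordering is guided by a learned scorer. This is emphatically not AlphaGo's actual architecture; every function here is a made-up placeholder:

```python
# Toy hybrid: symbolic game-tree search guided by a learned move scorer.
# All functions are placeholders; none of this is AlphaGo's real design.

def policy_net(state, move):
    """Stand-in for a trained neural net scoring moves in a state."""
    return (hash((state, move)) % 100) / 100.0

def legal_moves(state):
    return [0, 1, 2]            # placeholder game rules (the symbolic part)

def apply_move(state, move):
    return (state, move)        # placeholder state transition

def evaluate(state):
    return hash(state) % 3 - 1  # placeholder leaf evaluation

def search(state, depth):
    """Symbolic negamax skeleton; the 'net' only orders and prunes moves."""
    if depth == 0:
        return evaluate(state)
    moves = sorted(legal_moves(state), key=lambda m: -policy_net(state, m))
    return max(-search(apply_move(state, m), depth - 1) for m in moves[:2])

print(search("start", depth=3))
```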
The article doesn't mention Google's Knowledge Graph by name. But that is what the reporter is referring to in sentences such as these, which mention "a strict set of rules set by humans":
> But for a time, some say, he [Singhal] represented a steadfast resistance to the use of machine learning inside Google Search. In the past, Google relied mostly on algorithms that followed a strict set of rules set by humans.
I know because I spoke with Metz at length and was quoted in the article.
The Knowledge Graph was, by definition, a rules engine. It was GOFAI in the tradition of Minsky, the semantic web and all the brittleness and human intervention that entailed.
What he's saying here is that Google has relied on machine learning in the form of RankBrain to figure out which results to serve when it's never seen a query before. And the news, in this case, is that statistical methods like RankBrain will take a larger and larger role, and symbolic scaffolding like the Knowledge Graph will take a smaller one.
You are right that the most powerful, recent demonstrations of AI combine neural nets with other algorithms. In the case of AlphaGo, NNs were combined with reinforcement learning and Monte Carlo Tree Search. I don't think a rules engine (the symbolic system you refer to) was involved at all there. Nor is it necessary, if by studying the world our algorithms can intuit its statistical structure and correlations without having them hard-coded by humans beforehand. It turns out they do OK learning from scratch, given enough data.
So in many cases we don't need the massive data entry of a rules engine created painstakingly by humans, which is great, because those are brittle and adapt poorly to the world if left to themselves.
The Knowledge Graph is just a way of encoding the world's structure. The world may reveal its structures to our neural networks, given enough time, data and processing power.
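RankBrain's internals aren't public, but the generic technique being described - mapping unseen queries into a learned vector space and matching by proximity - looks roughly like this sketch, where the vectors and the "encoder" are made-up stand-ins, not the real thing:

```python
# Generic embedding-based query matching. RankBrain's internals aren't
# public; the vectors and the "encoder" below are made-up stand-ins.
import numpy as np

# Pretend each known query already has a learned vector.
known = {
    "cheap flights to hanoi": np.array([0.9, 0.1, 0.0]),
    "kolmogorov complexity":  np.array([0.0, 0.2, 0.9]),
}

def embed(query):
    # Stand-in for a trained encoder. A real one maps similar queries to
    # nearby vectors; this hash-seeded one obviously does not.
    rng = np.random.default_rng(abs(hash(query)) % (2**32))
    return rng.normal(size=3)

def nearest(query):
    q = embed(query)
    cosine = lambda v: float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)
    # An unseen query resolves to whichever known query it lands nearest to.
    return max(known, key=lambda k: cosine(known[k]))

print(nearest("budget airfare to ho chi minh"))
```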
Hmm, are you sure? Doesn't "a strict set of rules set by humans" refer to the PageRank algo alongside rules for spammy content, and rules like whether meta keywords are set, and so on - all the little rules that feed into deciding where a matching page ranks in the result set? That's why it's tweakable by engineers, no?
"The Knowledge Graph is just a way of encoding the world's structure." Precisely. Very well said. "The world may reveal its structures to our neural networks, given enough time, data and processing power." But that's the point, NNs don't have to perform this uncovering because we do the hard work for them in the form of Wikidata and Freebase and what have you. I don't get what you think is brittle about this.
I was referring to the very recent article[1] by Gary Marcus; I need to quote a good chunk:
"""To anyone who knows their history of cognitive science, two people ought to be really pleased by this result: Steven Pinker, and myself. Pinker and I spent the 1990’s lobbying — against enormous hostility from the field — for hybrid systems, modular systems that combined associative networks (forerunners of today’s deep learning) with classical symbolic systems. This was the central thesis of Pinker’s book Words and Rules and the work that was at the core of my 1993 dissertation. Dozens of academics bitterly contested our claims, arguing that single, undifferentiated neural networks would suffice. Two of the leading advocates of neural networks famously argued that the classical symbol-manipulating systems that Pinker and I lobbied for were not “of the essence of human computation.”""
For Marcus the symbolic system in AlphaGo _is_ Monte Carlo Tree Search. I'm saying that for the so-called Semantic Web the symbolic system is the Knowledge Graph. This Steven Levy article[2] from Jan. 2015 put the queries that evoke it at 25% back then. I figure it's more now and growing slowly, alongside the ML of RankBrain.
Yep, this has apparently been more of a Wall Street operation than anything else. Google needs the capital to transform itself and to weather the decline of web search revenue.