The article doesn't mention Google's Knowledge Graph by name. But that is what the reporter is referring to in sentences such as these, which mention "a strict set of rules set by humans":
> But for a time, some say, he [Singhal] represented a steadfast resistance to the use of machine learning inside Google Search. In the past, Google relied mostly on algorithms that followed a strict set of rules set by humans.
I know because I spoke with Metz at length and was quoted in the article.
The Knowledge Graph was, by definition, a rules engine. It was GOFAI in the tradition of Minsky, the semantic web and all the brittleness and human intervention that entailed.
What he's saying here is that Google has relied on machine learning in the form of RankBrain to figure out which results to serve when it's never seen a query before. And the news, in this case, is that statistical methods like RankBrain will take a larger and larger role, and symbolic scaffolding like the Knowledge Graph will take a smaller one.
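To make the contrast with a rules engine concrete: the publicly reported gist of RankBrain is that queries get mapped into vectors, and a never-before-seen query is handled by its similarity to things the system has already seen, rather than by some hand-written rule firing. Here's a deliberately crude sketch of that shape (the bag-of-words "embedding" is a stand-in for a learned one, and none of this is Google's actual pipeline):

```python
from collections import Counter
import math

def embed(query):
    # Stand-in for a learned encoder: a bag-of-words count vector.
    return Counter(query.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

seen = ["hotels near jfk airport",
        "cheap flights to new york",
        "jfk airport parking rates"]

unseen = "parking near jfk airport"
best = max(seen, key=lambda q: cosine(embed(unseen), embed(q)))
print(best)  # the most similar previously seen query wins, no hand-written rule involved
```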
You are right that the most powerful, recent demonstrations of AI combine neural nets with other algorithms. In the case of AlphaGo, NNs were combined with reinforcement learning and Monte Carlo Tree Search. I don't think a rules engine (the symbolic system you refer to) was involved at all there. Nor is it necessary, if by studying the world our algorithms can intuit its statistical structure and correlations without having them hard-coded by humans beforehand. It turns out they do OK learning from scratch, given enough data.
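To be concrete about what that combination looks like: in AlphaGo the neural nets supply move priors and position values, and the tree search decides where to spend simulations. A stripped-down sketch of that selection/backup loop (the `policy_net` here is a stub, any callable returning `{action: prior}`; this is the general PUCT-style recipe described in the AlphaGo papers, not DeepMind's code):

```python
import math

class Node:
    def __init__(self, prior):
        self.prior = prior      # P(s, a) from the policy network
        self.visits = 0         # N(s, a)
        self.value_sum = 0.0    # accumulated value-network / rollout results
        self.children = {}      # action -> Node

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT selection: exploit high-value children, but let the policy
    network's prior steer exploration toward plausible moves."""
    total = sum(child.visits for child in node.children.values())
    def score(child):
        u = c_puct * child.prior * math.sqrt(total + 1) / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=lambda kv: score(kv[1]))

def expand(node, policy_net, state):
    # Ask the (learned) policy for priors over legal moves in this state.
    for action, prior in policy_net(state).items():
        node.children[action] = Node(prior)

def backup(path, value):
    # Propagate the value network's evaluation back up the searched path.
    for node in path:
        node.visits += 1
        node.value_sum += value
```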
So in many cases we don't need the massive data entry of a rules engine created painstakingly by humans, which is great, because those are brittle and adapt poorly to the world if left to themselves.
The Knowledge Graph is just a way of encoding the world's structure. The world may reveal its structures to our neural networks, given enough time, data and processing power.
Hmm, are you sure? Doesn't "a strict set of rules set by humans" refer to the PageRank algo alongside rules for spammy content, and rules like whether meta keywords are set, and so on, all the little rules that feed into deciding where a matching page ranks in the result set? That's why it's tweakable by engineers, no?
"The Knowledge Graph is just a way of encoding the world's structure." Precisely. Very well said. "The world may reveal its structures to our neural networks, given enough time, data and processing power." But that's the point, NNs don't have to perform this uncovering because we do the hard work for them in the form of Wikidata and Freebase and what have you. I don't get what you think is brittle about this.
I was referring to the very recent article[1] by Gary Marcus, I need to quote a good chunk:
"""To anyone who knows their history of cognitive science, two people ought to be really pleased by this result: Steven Pinker, and myself. Pinker and I spent the 1990’s lobbying — against enormous hostility from the field — for hybrid systems, modular systems that combined associative networks (forerunners of today’s deep learning) with classical symbolic systems. This was the central thesis of Pinker’s book Words and Rules and the work that was at the core of my 1993 dissertation. Dozens of academics bitterly contested our claims, arguing that single, undifferentiated neural networks would suffice. Two of the leading advocates of neural networks famously argued that the classical symbol-manipulating systems that Pinker and I lobbied for were not “of the essence of human computation.”""
For Marcus the symbolic system in AlphaGo _is_ Monte Carlo Tree Search. I'm saying that for the so-called Semantic Web the symbolic system is the Knowledge Graph. This Steven Levy article[2] from Jan. 2015 put the share of queries that evoke it at 25% back then. I figure it's more now and growing slowly, alongside the ML of RankBrain.