Robin's post reveals a couple of fundamental misunderstandings. While he may be correct that, for now, many small firms should apply linear regression rather than deep learning to their limited datasets, he is wrong in his prediction of an AI bust. If it happens, it will not be for the reasons he cites.
He is skeptical that deep learning and other forms of advanced AI (1) will be applicable to smaller and smaller datasets, and (2) will become easier to use.
And yet some great research is being done that will prove him wrong on his first point:
https://arxiv.org/abs/1605.06065 https://arxiv.org/abs/1606.04080
One-shot learning, or learning from a few examples, is a field where we're making rapid progress, which means that in the near future, we'll obtain much higher accuracy on smaller datasets. So the immense performance gains we've seen by applying deep learning to big data will someday extend to smaller data as well.
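To make that concrete, here is a minimal sketch of the metric-learning idea behind recent one-shot papers such as Matching Networks (the second arXiv link above): classify a query by comparing it, in an embedding space, against a single labelled example per class. The identity-style embed function is my placeholder; real systems learn the embedding with a deep network trained on related tasks.

    import numpy as np

    def embed(x):
        # Placeholder embedding (just normalization); a trained deep network would go here.
        return x / (np.linalg.norm(x) + 1e-8)

    def one_shot_predict(support_x, support_y, query_x):
        """support_x holds one example per class; support_y their labels."""
        sims = [embed(query_x) @ embed(s) for s in support_x]  # cosine similarities
        return support_y[int(np.argmax(sims))]

    # Toy 5-way one-shot task: five classes, one labelled example each.
    rng = np.random.default_rng(1)
    support_x = rng.normal(size=(5, 64))
    support_y = np.arange(5)
    query = support_x[3] + 0.1 * rng.normal(size=64)  # noisy copy of class 3
    print(one_shot_predict(support_x, support_y, query))  # -> 3

The whole bet of this research line is that the embedding, learned once on plentiful related data, transfers to new classes seen only once.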
Secondly, Robin is skeptical that deep learning will be a tool most firms can adopt, given the lack of specialists. For now, that talent is scarce and salaries are high. But this is a problem that job markets know how to fix. The data science academies popping up in San Francisco exist for a reason: to satisfy that demand.
And to go one step further, the history of technology suggests that we find ways to wrap powerful technology in usable packages for less technical people. AI is going to be just one component that fits into a larger data stack, infusing products invisibly until we don't even think about it.
And FWIW, his phrase "deep machine learning" isn't a thing. Nobody says that, because it's redundant: deep learning is already a subset of machine learning.
I'm skeptical of claims about a one-shot learning silver bullet, unless people are talking about something different from how it has classically been presented, e.g., in Patrick Winston's MIT lectures. Yes, you can learn from a few examples, but only because you've imparted your expert knowledge, maintain a large number of heuristics, control the search space effectively, etc. There's a lot of domain-specific work required for each system, so I consider it more a classical AI approach than something that figures out everything from the data alone, like deep learning.
But again, maybe people mean something different from my description above when they talk about one-shot learning today. Either way, I don't think having to rely on a lot of domain-specific knowledge is necessarily a bad thing.
Disclaimer: I haven't looked at their work too closely.
I think the process they use to select the hyperparameters overfits to the labels in the validation set. The true size of the labelled data includes the 100 examples from the training set plus all the labels in the validation set.
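A minimal sketch of the leak (my own toy setup, not the paper's): even with pure-noise labels, picking whichever of many candidate "hyperparameter settings" (here, random models) scores best on the validation set yields well-above-chance validation accuracy, so the reported score silently depends on the validation labels.

    import numpy as np

    rng = np.random.default_rng(0)
    n_val, n_dim, n_trials = 100, 20, 500

    # Validation set whose labels are pure noise: no model can truly beat 50%.
    X_val = rng.normal(size=(n_val, n_dim))
    y_val = rng.integers(0, 2, size=n_val)

    best_acc = 0.0
    for _ in range(n_trials):  # each trial stands in for one hyperparameter setting
        w = rng.normal(size=n_dim)  # a "model" with random weights
        acc = ((X_val @ w > 0).astype(int) == y_val).mean()
        best_acc = max(best_acc, acc)

    # Typically prints ~0.60+: the selection procedure has fit the validation labels.
    print(f"best validation accuracy on random labels: {best_acc:.2f}")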
>> One-shot learning, or learning from a few examples, is a field where we're making rapid progress, which means that in the near future, we'll obtain much higher accuracy on smaller datasets.
I'm really not convinced by one-shot learning. Or rather, I don't see how it's possible to show that any such technique generalises well to unseen data, when you're supposed to have access to only very little data during development.
Even with very thorough cross-validation, if your development sets (training, validation and test) are altogether, say, 0.1 of the size of the unseen data you hope to predict on, your validation results are going to be completely meaningless.
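To put a rough number on that (my own back-of-the-envelope, assuming i.i.d. examples): the standard error of an accuracy estimate from n held-out examples is sqrt(p(1-p)/n), so tiny validation sets give extremely wide uncertainty bands.

    import math

    p = 0.8  # assumed true accuracy of the model
    for n in (20, 100, 1000):
        se = math.sqrt(p * (1 - p) / n)  # binomial standard error of the estimate
        print(f"n={n:5d}: measured accuracy roughly {p - 2*se:.2f}-{p + 2*se:.2f} (+/- 2 SE)")

    # With n = 20, a truly 80%-accurate model can plausibly score anywhere
    # from ~0.62 to ~0.98 on the held-out split.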
I think there are a lot of programmers who try playing around with "deep learning" and it doesn't work for them, because they lack the knowledge necessary to make it work, such as calculus, statistics, signal processing theory, etc.