Leaving interpretability and conspiracy theories about softy PhDs aside for a bit, "SOTA advances" are not progress. For example, in the seven years since AlexNet, results have kept creeping upwards by tiny fractions on the same old datasets (or training times and costs have gone down), all of it achieved by slight tweaks to the basic CNN architecture, perhaps better training techniques, or, of course, more compute. But there has been no substantial progress in fundamental algorithmic techniques: no new alternative to backpropagation, no radically new architectures that go beyond convolutional filters.
But I'll let Geoff Hinton himself explain why the reliance on state-of-the-art results is effectively hamstringing progress in the field:
GH: One big challenge the community faces is that if you want to get a paper published in machine learning now it's got to have a table in it, with all these different data sets across the top, and all these different methods along the side, and your method has to look like the best one. If it doesn’t look like that, it’s hard to get published. I don't think that's encouraging people to think about radically new ideas.
Now if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted, because it's going to get some junior reviewer who doesn't understand it. Or it’s going to get a senior reviewer who's trying to review too many papers and doesn't understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that's really bad.
What we should be going for, particularly in the basic science conferences, is radically new ideas. Because we know a radically new idea in the long run is going to be much more influential than a tiny improvement. That's I think the main downside of the fact that we've got this inversion now, where you've got a few senior guys and a gazillion young guys.
> no new alternative to backpropagation, no radically new architectures that go beyond convolutional filters.
Attention, for one: the mechanism at the core of the Transformer architecture.
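For readers who only see the one-word reply: this refers to scaled dot-product attention (Vaswani et al., 2017), an architectural primitive that is neither a convolutional filter nor backpropagation-replacing, but is a genuinely different way of mixing information across positions. A minimal NumPy sketch, purely illustrative (the function and variable names are mine, not from the thread):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)  # (batch, seq, seq) query/key similarity
    weights = softmax(scores, axis=-1)                # one attention distribution per position
    return weights @ V                                # weighted average over all positions

# Toy self-attention: Q = K = V = x, so every position attends to every other position.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 5, 8))                        # batch of 2 sequences, length 5, dim 8
print(scaled_dot_product_attention(x, x, x).shape)    # (2, 5, 8)
```

The contrast with a CNN is that each output position mixes information from every input position through learned similarity, rather than through a fixed-size convolutional window.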
My point wasn't about a lack of investment in, or propagation of, fundamental research that isn't trendy. It was about what should be science being hijacked by "softy PhDs" who found a niche in less demanding areas and will likely impose their will over the people doing hard science rather than politics, much as CoCs were recently used by some fringe non-technical groups to take control over open source/free software licenses. It's a pattern that has repeated across industry and academia: the people who move a field forward are often displaced by their "soft-skilled" and less capable peers.
https://www.wired.com/story/googles-ai-guru-computers-think-...