> IMO it attracts people who can't come up with SOTA advances on real problems, and it's their "easier, vague target" to hit to finish their PhDs while getting published in top journals.
I'm also pretty wary of interpretability/explainability research in AI. Work on robustness and safety tends to be a bit better (those communities at least mathematically characterize their goals and contributions, and propose reasonable benchmarks).
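For instance, the adversarial robustness community typically states its goal as an explicit minimax objective (the adversarial risk from Madry et al.), something most interpretability work has no analogue of:

    min_theta  E_{(x,y)~D} [ max_{||delta|| <= eps}  L(f_theta(x + delta), y) ]

That is, minimize the loss under the worst-case bounded perturbation delta, which gives both a precise target and a natural benchmark (attack strength vs. accuracy).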
But I'm also skeptical of a lot of modern deep learning research in general.
In particular, your critique cuts both ways.
If I had a penny for every dissertation in the past few years that boiled down to "I built an absurdly over-fit/wrongly-fit model in domain D and claimed it beats SoTA in that domain. Unfortunately, I never took a course about D and ignored or wildly misused that domain's competitions/benchmarks. No one in that community took my amazing work seriously, so I submitted to NeurIPS/AAAI/ICML/IJCAI/... instead. On the Nth resubmission I got some reviewers who don't know anything about D but lose their minds over anything with the word deep (conv, residual, variational, adversarial, ... depending on the year) in the title. So, now I have a PhD in 'AI for D' but everyone doing research in D rolls their eyes at my work."
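To make the caricature concrete, here's a sketch of the kind of "wrongly-fit" evaluation being mocked: model selection done against the test set, which inflates the reported number well past what the domain's real benchmark would show. (Everything here is hypothetical and deliberately wrong.)

    # Hypothetical sketch: hyperparameters tuned on the *test* set,
    # so the reported "SOTA" score is optimistically biased.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=50, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    best_score, best_model = 0.0, None
    for depth in range(1, 30):
        model = RandomForestClassifier(max_depth=depth, random_state=0)
        model.fit(X_train, y_train)
        score = model.score(X_test, y_test)  # <-- leaking the test set
        if score > best_score:
            best_score, best_model = score, model

    print(f"'SOTA' accuracy: {best_score:.3f}")  # biased upward

The fix, of course, is to select hyperparameters on a held-out validation split (or cross-validation) and touch the test set exactly once.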
> Those same people will likely at some point call for a strict regulation of AI...
The most effective calls for regulation of the software industry will not come from technologists. They will come from politicians in the vein of Josh Hawley or Elizabeth Warren, whose goals and motivations do not align with those of researchers doing interpretability/explainability work. If the tech industry is regulated, it's extremely unlikely that the regulations will be based on proposals from STEM PhDs. At least in the USA.
> faking results of their interpretable models
Going from "this work is probably not valuable" to "this entire research community is a bunch of fraudsters" is a pretty big leap. Do you have any evidence of this happening?
> If I had a penny for every dissertation in the past few years that boiled down to...
This is very, very accurate. On the other hand, I often see field-specific papers from domain experts with little ML experience that use very basic (and often unnecessary) ML techniques, and that are then blown out of the water when serious DL researchers give the problem a shot.
One field that comes to mind where I have really noticed this problem is genomics.
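As a toy illustration of the gap: the "basic" approach is often logistic regression on hand-built k-mer counts, while a DL researcher would reach for something like a small 1D CNN over one-hot-encoded sequence, which learns motif detectors directly. A minimal sketch, assuming PyTorch; the task, names, and shapes are all made up:

    import torch
    import torch.nn as nn

    BASES = "ACGT"

    def one_hot(seq: str) -> torch.Tensor:
        # (4, len(seq)) one-hot encoding of a DNA sequence
        idx = torch.tensor([BASES.index(b) for b in seq])
        return nn.functional.one_hot(idx, num_classes=4).T.float()

    cnn = nn.Sequential(              # conv filters act as learned motif detectors
        nn.Conv1d(4, 32, kernel_size=8),
        nn.ReLU(),
        nn.AdaptiveMaxPool1d(1),      # "does this motif occur anywhere?"
        nn.Flatten(),
        nn.Linear(32, 1),
    )

    x = one_hot("ACGTACGTACGTACGT").unsqueeze(0)  # batch of 1
    print(cnn(x).shape)  # torch.Size([1, 1])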