The ideas from this article are really cool, and the design is beautiful. I see these techniques as providing the ability to partially interpret models. While clearly useful to practitioners seeking an intuition for what their models learn, it appears we are still very far from being able to thoroughly audit deep learning computer vision models.
I wonder if, in the long run, making models that are both effective and interpretable can be done by first building a black box model and then interpreting it as much as possible using clever ideas like those from the article. The interpretations of the black box model can then inform the design of a relatively simple bespoke model. The bespoke model may never outperform the black box at prediction tasks, but in many applications the ability to perform audits and estimate uncertainty should be worth it.