This is an incredibly salient point. Others have also pointed out that there are numerous applications in which black box models appear to offer significantly greater accuracy than interpretable models, bolstering the notion that this article is a bit overstated.
However, in this article and elsewhere, Professor Rudin has cited compelling evidence of cases in which black box models have been demonstrated to be no more accurate than interpretable alternatives. I feel this fairly justifies the question in the title of the article. For example, based upon the available evidence, it seems reasonable that some onus should lie on the creators and buyers of COMPAS (a proprietary black box recidivism model) to demonstrate that COMPAS actually is more accurate than an interpretable baseline. While it may not be the case, as the article seems to suggest, that every modeling problem admits an interpretable alternative with comparable accuracy, in cases where one does exist there doesn't seem to be any justification for using a black box model.
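To make that burden-of-proof point concrete, here is a minimal sketch of the kind of comparison I have in mind (Python with scikit-learn; the file name, column names, and choice of models are my own placeholders, not COMPAS or its actual feature set): if the black box cannot clearly beat a transparent baseline under cross-validation, its opacity buys nothing.

```python
# Sketch only: hypothetical data, hypothetical features. The point is the
# procedure, not the numbers.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

# "recidivism.csv" and its columns are placeholders, not the real COMPAS data.
df = pd.read_csv("recidivism.csv")
X = df[["age", "priors_count", "juvenile_felonies"]]  # a few transparent, numeric features
y = df["reoffended"]

interpretable = LogisticRegression(max_iter=1000)  # coefficients are auditable
black_box = GradientBoostingClassifier()           # stands in for a proprietary model

for name, model in [("interpretable baseline", interpretable), ("black box", black_box)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")
# If the black box does not clearly win this comparison, the case for using it collapses.
```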
On the matter of "human-style" interpretability, we are brought to the difference between "interpretability" and "explainability." Humans have a complex capacity for constructing explanations of our own thoughts and actions and those of others (among other things). As OP points out, many famous psychological experiments by Kahneman and others have shown that much of our reasoning appears to be post-hoc, often biased, and often inaccurate; in other words, human explanations are not actually true, transparent interpretations of our thoughts and actions. However, we humans do have a powerful capacity to evaluate and challenge the explanations presented by others, and we are able to reject bad explanations. For those interested, a great book on this topic is "The Enigma of Reason" by Mercier and Sperber (https://www.hup.harvard.edu/catalog.php?isbn=9780674237827). The gist is that while explanations are not the same as transparent interpretability, they are still useful.
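In ML terms, one common way of "constructing an explanation" for a black box is a post-hoc global surrogate, and it shows exactly why an explanation is weaker than a model that is interpretable by construction. Below is a small sketch (Python/scikit-learn on synthetic data; none of this comes from the article, and the surrogate approach is just one illustrative choice): the surrogate's fidelity score measures how much of the black box's behavior the "story" actually captures.

```python
# A surrogate explanation is an approximation OF the black box, not the model itself.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_preds = black_box.predict(X)  # the behavior we want to "explain"

# Fit a shallow, human-readable tree to mimic the black box's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)
fidelity = (surrogate.predict(X) == bb_preds).mean()
print(f"surrogate fidelity to black box: {fidelity:.2%}")
# Anything below 100% is behavior the explanation simply does not account for --
# that shortfall is the gap between explainability and true interpretability.
```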
I would conjecture that at some level of complexity (which some predictive tasks like pixel-to-label image recognition seem to exhibit), true end-to-end interpretability is not possible -- the best we can do is to construct an explanation. However, two very important points should be observed when considering this conjecture:
1. (Professor Rudin's point in the article) In cases that are not too complex for interpretable models to achieve accuracy comparable to black-box models, we can and should use them, as they offer super-human transparency at no cost in accuracy.
2. Constructing no explanations (or bad explanations) is not the same as reaching the same level of semi-transparency that humans offer. If we want to use human interpretability as a benchmark, black box models with no explanations are not up to par.