Explainability is not a given even in more traditional complex systems. Decisions are often an aggregation of a large number of signals, and there is frequently no single intuitive explanation for the system's decisions.
A lot is expected of AI systems today, from fairness (how do we even define that?) to universality. In my view we need to develop a practical understanding of what it means to build the system we have in mind: do I understand the conditions under which I want my system to perform well, and do I have the tools to assess whether I am getting there? Interpretability is orthogonal to all of this.
I would much rather have a well-tested system, accompanied by online monitoring that detects unusual inputs in an ever-changing data distribution and flags when updates are needed or a human needs to take control, than an unreliable system that is great at providing explanations.
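To be concrete about the kind of monitoring I have in mind, here is a minimal, purely illustrative sketch (the class name, window size, and threshold are placeholders of my own choosing, not any standard tool): it compares a sliding window of recent inputs against a reference sample taken at training time using a two-sample Kolmogorov-Smirnov test and flags drift when the two look sufficiently different.

```python
# Illustrative sketch of online drift monitoring (not a reference implementation).
# Assumes a single scalar feature; window size and p-value threshold are arbitrary.
from collections import deque

import numpy as np
from scipy.stats import ks_2samp


class DriftMonitor:
    def __init__(self, reference, window_size=200, p_threshold=0.01):
        self.reference = np.asarray(reference)   # sample drawn from training data
        self.window = deque(maxlen=window_size)  # sliding window of live inputs
        self.p_threshold = p_threshold

    def observe(self, value):
        """Record one incoming value; return True if drift is suspected."""
        self.window.append(value)
        if len(self.window) < self.window.maxlen:
            return False  # not enough live data yet to compare distributions
        # Two-sample KS test: a small p-value means the live window
        # no longer looks like the training-time reference sample.
        result = ks_2samp(self.reference, np.fromiter(self.window, dtype=float))
        return result.pvalue < self.p_threshold


# Usage: flag unusual inputs so a human can be notified or take control.
rng = np.random.default_rng(0)
monitor = DriftMonitor(reference=rng.normal(0.0, 1.0, size=1000))
for x in rng.normal(2.0, 1.0, size=300):  # simulated shifted production data
    if monitor.observe(x):
        print("Drift detected: notify an operator or trigger a model update")
        break
```

In practice one would monitor many features and calibrate the threshold against an acceptable false-alarm rate, but the point stands: "notify when the data changes" is a testable property of the system, independent of whether the model can explain itself.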
Is that really a universal fact? In any case, my statement goes in the opposite direction: is a reliable system necessarily "well understood", in the sense that it can explain its decisions? Most complex systems powering our lives cannot tell us anything about how they made those decisions.
AI systems add a layer of complexity. Even if you can explain a decision well on your training data, I seriously doubt you will still be able to provide reasonable explanations for completely out-of-distribution data.