It's not that accuracy will always be sacrificed if one wants an explainable model. The point is: if interpretability is an important constraint, it can block improvements in accuracy.
Sometimes, the best interpretable model is as good as a black box, and that's great.
When this is not the case, there is a real trade-off, and one should weigh which matters more for the actual problem. Perhaps interpretability is not a big deal.
Another solution is to try to extract interpretability from the more accurate black box model with something like SHAP.
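As a rough sketch of that idea (assuming the black box is a scikit-learn gradient-boosted model, purely for illustration), SHAP's TreeExplainer can attribute each prediction to individual features after the fact:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)

# Fit the "black box": accurate but not directly interpretable.
model = GradientBoostingRegressor().fit(X, y)

# SHAP values decompose each individual prediction into
# additive per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features drive the model's predictions overall.
shap.summary_plot(shap_values, X)
```

This gives post-hoc explanations rather than an intrinsically interpretable model, which is often an acceptable middle ground.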
This is a great point. There is a general lack of understanding about what it means for models to be interpretable and explainable. These words get thrown around by people who don't understand their definitions, or the trade-off with accuracy.