This isn't how it works, at all. You're going to get better results with 1,000 parameters than with 100 explainable ones. There's a limit to how much humans can understand. We use machine learning to surpass that limit
> You're going to get better results with 1,000 parameters than with 100 explainable ones.
This is not always true.
Many problems are modeled very well with fewer than 100 parameters, and adding more is of little-to-no benefit.
Many problems are naturally hierarchical such that simple models can be combined to yield a large number of explainable parameters. If done well, this can result in a high-performing solution. Admittedly, this is usually harder than just applying a blackbox.
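As a toy sketch of what I mean by hierarchical composition (my own illustration, with made-up data and a known breakpoint): fit a tiny two-parameter linear model on each regime, then combine them. Every parameter in the combined model has a clear meaning you can inspect.

```python
def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Synthetic data: slope 2 below x=10, slope -1 above (breakpoint assumed known).
seg1 = [(x, 2.0 * x + 1.0) for x in range(0, 10)]
seg2 = [(x, -1.0 * x + 31.0) for x in range(10, 20)]

a1, b1 = fit_line(*zip(*seg1))
a2, b2 = fit_line(*zip(*seg2))

def hierarchical_model(x):
    """Combine two simple sub-models; four parameters total, all explainable."""
    return a1 * x + b1 if x < 10 else a2 * x + b2

print(round(a1, 3), round(b1, 3))           # slope/intercept of the first regime
print(round(hierarchical_model(15.0), 3))   # prediction in the second regime
```

The point isn't that piecewise regression beats deep nets; it's that when the structure of the problem is known, composing small explainable pieces can perform well while every failure mode stays legible.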
In critical applications, an explainable model with benign failure modes (even if it has worse overall performance) can be far preferable to a blackbox with wildly unpredictable failure modes. From a utility standpoint, the explainable results are better.
> There's a limit to how much humans can understand. We use machine learning to surpass that limit
We can also work to improve our ability to discover and understand. I think that holds far more promise than improving our ability to do things we don't understand.