It's not that accuracy must always be sacrificed for an explainable model. The point is that if interpretability is an important constraint, it can prevent improvements in accuracy.

Sometimes, the best interpretable model is as good as a black box, and that's great.

When that's not the case, one has to weigh which matters more for the actual problem. Perhaps interpretability is not a big deal.

Another solution is to try to extract interpretability from the more accurate black box model with something like SHAP.
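To make that concrete, here is a minimal, stdlib-only sketch of the idea behind SHAP: Shapley values attribute a black-box prediction to each feature by averaging that feature's marginal contribution over all coalitions of the other features, with absent features "masked" by a baseline value. This brute-force version is only feasible for a handful of features (real SHAP implementations use smarter approximations); `black_box` and `baseline` are illustrative stand-ins, not part of any library API.

```python
import itertools
import math

def shapley_values(predict, x, baseline):
    """Exact Shapley values for a black-box predict() on a single input x.

    Features outside a coalition are replaced by their baseline value,
    the same masking idea SHAP uses, brute-forced over all 2^n coalitions.
    """
    n = len(x)
    phi = [0.0] * n
    features = range(n)

    def value(coalition):
        # Input where features outside the coalition are masked to baseline.
        masked = [x[i] if i in coalition else baseline[i] for i in features]
        return predict(masked)

    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for subset in itertools.combinations(others, size):
                s = set(subset)
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = (math.factorial(size) * math.factorial(n - size - 1)
                          / math.factorial(n))
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# A toy "black box" standing in for a fitted GBM or neural net.
def black_box(features):
    a, b, c = features
    return 3 * a + 2 * b - c

# For a linear model with a zero baseline, the Shapley value of each
# feature is just coefficient * x, here approximately [3.0, 2.0, -1.0].
print(shapley_values(black_box, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))
```

The attributions also satisfy the "efficiency" property: they sum to the difference between the prediction at `x` and at the baseline, which is what makes this kind of post-hoc explanation internally consistent even when the underlying model is opaque.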



This is a great point. There is a general lack of understanding about what it means for models to be interpretable and explainable. These words get thrown around by people who don't understand their definitions, or the trade-off with accuracy.

Some papers I found interesting on the subject:

https://arxiv.org/abs/1606.03490

https://arxiv.org/abs/1707.03886

https://arxiv.org/abs/1806.07552

https://arxiv.org/abs/1702.08608 (I found this a good summary of the issues)



