Stop explaining black-box machine learning models for high stakes decisions and use interpretable models instead
(blog.acolyer.org)
Another great summary from The Morning Paper, with two main takeaways.
First, a sharpening of your understanding of the difference between explainability and interpretability, and of why the former may be problematic:
Let us stop calling approximations to black box model predictions explanations. For a model that does not use race explicitly, an automated explanation “This model predicts you will be arrested because you are black” is not a model of what the model is actually doing, and would be confusing to a judge, lawyer or defendant.
And second, some great pointers to techniques for creating truly interpretable models.
The belief that there is always a trade-off between accuracy and interpretability has led many researchers to forgo the attempt to produce an interpretable model.
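To make that trade-off claim concrete, here is a minimal sketch (my own illustration using scikit-learn, not the paper's techniques or data) that compares a directly interpretable model, a logistic regression whose coefficients can be read off, against a black-box random forest on a small tabular dataset. On data like this the accuracy gap is often small or absent:

```python
# Illustrative only: compare an interpretable model with a black box.
# Assumes scikit-learn is installed; dataset choice is arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Interpretable: a linear model whose learned coefficients can be inspected directly.
interpretable = LogisticRegression(max_iter=5000)
# Black box: an ensemble whose individual predictions are hard to trace.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)

for name, model in [("logistic regression", interpretable),
                    ("random forest", black_box)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy = {acc:.3f}")
```

The point is not that interpretable models always match black boxes, but that the trade-off should be measured rather than assumed before reaching for post-hoc explanations.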