Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

arXiv preprint arXiv:1611.05817 (Machine Learning), 2016.


Abstract:

At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, precision to how accurate those predictions are, and effort to the work required either to interpret the model up front or to make predictions about its behavior.
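
The coverage and precision notions above admit a direct computation for a rule-based explanation: coverage is the fraction of instances the rule applies to, and precision is how often the model's prediction matches the label the explanation promises on those instances. Below is a minimal sketch of that computation; the list-of-conditions rule encoding and the stand-in model are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def rule_applies(rule, x):
    # A rule is a list of (feature_index, value) conditions;
    # it applies to an instance when every condition holds.
    return all(x[i] == v for i, v in rule)

def coverage_and_precision(rule, predict, X, anchored_label):
    # Coverage: fraction of instances the rule applies to, i.e. how often
    # a human could use this explanation to predict the model's behavior.
    applies = np.array([rule_applies(rule, x) for x in X])
    coverage = applies.mean()
    if not applies.any():
        return float(coverage), 0.0
    # Precision: among covered instances, how often the model's prediction
    # actually matches the label the explanation anchors.
    precision = np.mean(predict(X[applies]) == anchored_label)
    return float(coverage), float(precision)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(1000, 4))               # binary features
    predict = lambda X: (X[:, 0] & X[:, 1]).astype(int)  # stand-in "model"

    rule = [(0, 1), (1, 1)]  # "feature 0 == 1 AND feature 1 == 1"
    cov, prec = coverage_and_precision(rule, predict, X, anchored_label=1)
    print(f"coverage={cov:.2f}, precision={prec:.2f}")   # ~0.25, 1.00
```

In this toy setup the rule covers roughly a quarter of the instances and is perfectly precise on them, which illustrates the trade-off the abstract names: a narrower rule gives clearer precision boundaries at the cost of coverage.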
