Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
arXiv preprint arXiv:1611.05817, 2016.
At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model's behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model's behavior, ...