Consistent Sufficient Explanations and Minimal Local Rules for explaining the decision of any classifier or regressor

NeurIPS 2022

Abstract
To explain the decision of any regression or classification model, we extend the notion of probabilistic sufficient explanations (P-SE). For each instance, this approach selects the minimal subset of features that is sufficient to yield the same prediction with high probability, while removing the other features. The crux of P-SE is computing the conditional probability of maintaining the same prediction. We therefore introduce an accurate and fast estimator of this probability via Random Forests for any data $(\boldsymbol{X}, Y)$ and show its efficiency through a theoretical analysis of its consistency. As a consequence, we extend P-SE to regression problems. In addition, we handle non-discrete features without learning the distribution of $\boldsymbol{X}$ and without requiring access to the model for making predictions. Finally, we introduce local rule-based explanations for regression/classification based on P-SE and compare our approaches with other explainable AI methods. These methods are available as a Python package.
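The core quantity in P-SE is the probability that the model's prediction is unchanged when a feature subset $S$ is fixed to the instance's values and the remaining features vary. The toy sketch below illustrates that idea with a Monte Carlo estimate and a greedy subset search; it approximates the conditional distribution of the dropped features by marginal resampling of training rows, which is a simplification (the paper instead estimates the conditional probability consistently via Random Forests), and the function names and threshold `pi` are illustrative, not the package's API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def same_prediction_prob(x, subset, X_ref, n_mc=200):
    """Estimate P(f(X) = f(x) | X_S = x_S) by plugging x's values for
    `subset` into randomly drawn reference rows (marginal resampling,
    a crude stand-in for the paper's conditional estimator)."""
    samples = X_ref[rng.integers(0, len(X_ref), size=n_mc)].copy()
    samples[:, subset] = x[subset]
    return np.mean(model.predict(samples) == model.predict(x[None])[0])

def greedy_sufficient_subset(x, X_ref, pi=0.9):
    """Greedily grow a feature subset until the same-prediction
    probability exceeds pi; a heuristic stand-in for the paper's
    minimal-subset search."""
    subset, remaining = [], list(range(x.shape[0]))
    while same_prediction_prob(x, subset, X_ref) < pi and remaining:
        best = max(remaining,
                   key=lambda j: same_prediction_prob(x, subset + [j], X_ref))
        subset.append(best)
        remaining.remove(best)
    return sorted(subset)

S = greedy_sufficient_subset(X[0], X)
```

Fixing all features trivially gives probability 1, so the greedy loop always terminates; the interesting output is how few features suffice to lock in the prediction.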
Keywords
Interpretability, Trustworthy ML, Robust and Reliable ML, rule-based models, learning theory, random forests, explainable AI, consistency, tree-based models