
Discriminative Attribution from Counterfactuals

arXiv (Cornell University), 2021

Abstract
We present a method for neural network interpretability that combines feature attribution with counterfactual explanations to generate attribution maps highlighting the most discriminative features between pairs of classes. We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner, preventing potential observer bias. We evaluate the proposed method on three diverse datasets, including a challenging artificial dataset and real-world biological data. We show quantitatively and qualitatively that the highlighted features are substantially more discriminative than those extracted using conventional attribution methods, and argue that this type of explanation is better suited for understanding fine-grained class differences as learned by a deep neural network.
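The core idea of the abstract can be illustrated with a toy sketch: given an input, its counterfactual from another class, and a classifier, attribute the *difference* between the pair, then evaluate the attribution objectively by copying the top-attributed features from the counterfactual into the real input and checking whether the prediction flips. Everything below is illustrative, not the paper's implementation: a linear model stands in for the deep network, and all names (`predict`, `x_real`, `x_counter`, the gradient-times-difference attribution) are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": only the first 8 of 64 features are
# actually discriminative between the two classes.
n_features = 64
w = np.zeros(n_features)
w[:8] = 1.0  # class score = w @ x

def predict(x):
    """Return 1 if the linear class score is positive, else 0."""
    return int(w @ x > 0)

# A real input (class 1) and a hypothetical counterfactual of it
# (class 0): identical except in the discriminative features.
x_real = rng.normal(0.0, 0.1, n_features)
x_real[:8] = 1.0
x_counter = x_real.copy()
x_counter[:8] = -1.0

# Discriminative attribution on the pair: for a linear model,
# gradient-times-difference reduces to |w * (x_real - x_counter)|.
attribution = np.abs(w * (x_real - x_counter))

# Objective evaluation: swap the top-k attributed features from the
# counterfactual into the real input; a good attribution map should
# flip the classifier's decision with few swapped features.
top_k = np.argsort(attribution)[::-1][:8]
x_hybrid = x_real.copy()
x_hybrid[top_k] = x_counter[top_k]

print(predict(x_real), predict(x_counter), predict(x_hybrid))  # → 1 0 0
```

Because the swap test only depends on the classifier's output, it gives a quantitative score for any attribution method without a human judging the maps, which is the observer-bias point made in the abstract.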
Keywords
Machine Learning Interpretability, Interpretable Models, Model Interpretability, Representation Learning, Transfer Learning