DeltaExplainer: A Software Debugging Approach to Generating Counterfactual Explanations

2022 IEEE International Conference On Artificial Intelligence Testing (AITest) (2022)

Abstract
The profound black-box nature of Machine Learning (ML) based Artificial Intelligence (AI) systems leads to the problem of interpretability. Explainable Artificial Intelligence (XAI) aims to provide explanations that help human users understand the decisions made by ML-based systems. In this paper, we propose a software debugging-based approach called DeltaExplainer for generating counterfactual explanations for predictions made by ML models. The key insight of our approach is that the problem of XAI is similar to the problem of software debugging. We evaluate DeltaExplainer on eight ML models trained on real-world datasets, and compare it against two state-of-the-art counterfactual explanation tools, DiCE and GeCo. Our experimental results suggest that the proposed approach can successfully generate counterfactual explanations and, in most cases, generates better explanations than DiCE and GeCo.
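The abstract does not spell out DeltaExplainer's algorithm, but the analogy it draws is to delta debugging, which minimizes a failure-inducing change set. As a hedged illustration of that idea (not the paper's actual implementation), the sketch below uses a simplified 1-minimal reduction, in the spirit of Zeller's delta debugging, to shrink the set of feature changes that flips a toy model's prediction; the loan-approval model, feature names, and helper functions are all illustrative assumptions.

```python
# Hedged sketch: delta-debugging-style minimization of feature changes
# that flip a model's prediction. Illustrative only; not the paper's code.

def ddmin_changes(predict, original, candidate, changes):
    """Greedily reduce the set of changed features while the prediction
    stays flipped (a simplified 1-minimal reduction, in the spirit of
    delta debugging)."""
    target = predict(candidate)  # the flipped outcome we must preserve
    essential = list(changes)
    for feature in list(essential):
        trial = set(essential) - {feature}
        # Rebuild the instance: changed features take the candidate's
        # value, everything else reverts to the original.
        x = {k: (candidate[k] if k in trial else original[k])
             for k in original}
        if predict(x) == target:
            essential.remove(feature)  # this change was not needed
    return essential

# Toy loan-approval model (an assumption for illustration).
def predict(x):
    return "approve" if x["income"] > 50 and x["debt"] < 20 else "deny"

original = {"income": 40, "debt": 30, "age": 25}   # predicted "deny"
candidate = {"income": 60, "debt": 10, "age": 30}  # predicted "approve"

changed = [f for f in original if original[f] != candidate[f]]
minimal = ddmin_changes(predict, original, candidate, changed)
# -> ["income", "debt"]: the age change is pruned as irrelevant
```

The resulting minimal change set reads as a counterfactual explanation: "had income been 60 and debt 10, the loan would have been approved". A full delta-debugging implementation would also try removing subsets of changes, not just single features.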
Keywords
Explainable AI, Debugging, XAI, Delta Debugging, Counterfactuals