Quantitative Evaluation of Machine Learning Explanations: A Human-Grounded Benchmark

IUI 2021

Cited by 23
Abstract
Research in interpretable machine learning proposes different computational and human-subject approaches to evaluate model saliency explanations. These approaches measure different qualities of explanations to achieve diverse goals in designing interpretable machine learning systems. In this paper, we propose a benchmark for the image and text domains using multi-layer human attention masks aggregated from multiple human annotators. We then present an evaluation study that compares model saliency explanations obtained with the Grad-CAM and LIME techniques to human understanding and acceptance. We demonstrate the benchmark's utility for quantitative evaluation of model explanations by comparing it with evaluations based on human subjective ratings and on ground-truth single-layer segmentation masks. Our results show that our threshold-agnostic evaluation method, grounded in the human attention baseline, is more effective than using single-layer object segmentation masks as ground truth. Our experiments also reveal user biases in the subjective rating of model saliency explanations.
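The abstract does not specify the exact metric behind the threshold-agnostic comparison, so the following is only a minimal sketch of the general idea: aggregate binary masks from several annotators into a soft human attention map, then score a model saliency map against it by sweeping binarization thresholds instead of committing to a single one. All function names and the IoU-sweep metric are illustrative assumptions, not the paper's method.

```python
import numpy as np

def aggregate_human_masks(masks):
    """Aggregate binary attention masks from multiple annotators into a
    soft (multi-layer) human attention map: each pixel's value is the
    fraction of annotators who marked it as relevant."""
    return np.mean(np.stack(masks, axis=0), axis=0)

def threshold_agnostic_score(saliency, human_attention, thresholds=None):
    """Score a model saliency map against the aggregated human attention
    baseline without fixing a single binarization threshold: compute
    intersection-over-union at a sweep of thresholds and average them.
    (Illustrative metric; the paper's actual measure may differ.)"""
    if thresholds is None:
        thresholds = np.linspace(0.1, 0.9, 9)
    # Normalize saliency to [0, 1] so thresholds are comparable across maps.
    s = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    # Treat pixels marked by at least half the annotators as "human-attended".
    human = human_attention >= 0.5
    ious = []
    for t in thresholds:
        pred = s >= t
        inter = np.logical_and(pred, human).sum()
        union = np.logical_or(pred, human).sum()
        ious.append(inter / union if union > 0 else 0.0)
    return float(np.mean(ious))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: three annotators marking a 32x32 image, plus a random saliency map.
    annotator_masks = [(rng.random((32, 32)) > 0.7).astype(float) for _ in range(3)]
    human_map = aggregate_human_masks(annotator_masks)
    saliency_map = rng.random((32, 32))
    print("threshold-agnostic IoU:", threshold_agnostic_score(saliency_map, human_map))
```

In this sketch, a Grad-CAM or LIME attribution map would take the place of the random `saliency_map`, and the same scoring routine could be applied per image or per text token sequence.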
Keywords
machine learning explanations, explanation evaluation, explanation benchmark, data annotation