Unjustified Sample Sizes and Generalizations in Explainable Artificial Intelligence Research: Principles for More Inclusive User Studies

IEEE Intelligent Systems (2023)

Abstract
Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (n = 220) published between 2012 and 2022. Most of the studies did not offer rationales for their sample sizes. Moreover, most of the papers generalized their conclusions beyond their study population, and there was no evidence that broader conclusions in quantitative studies were correlated with larger samples. These methodological problems can impede evaluations of whether XAI systems implement the explainability called for in ethical frameworks. We outline principles for more inclusive XAI user studies.
Keywords
Artificial intelligence, Statistics, Sociology, Ethics, Intelligent systems, Systematics, Stakeholders