No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML
CHI '20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 2020, pp. 1-13.
Automatically generated explanations of how machine learning (ML) models reason can help users understand and accept them. However, explanations can have unintended consequences, such as promoting over-reliance or undermining trust. This paper investigates how explanations shape users' perceptions of ML models with or without the ability to provide feedback.