Explainable Recommendations and Calibrated Trust: Two Systematic User Errors

Computer (2021)

Cited 11 | Views 7
Abstract
The increasing adoption of collaborative human-artificial intelligence decision-making tools has created a need to explain recommendations so that collaboration remains safe and effective. We explore how users interact with explanations and why trust-calibration errors occur, taking clinical decision-support systems as a case study.