Explanatory Dialogs: Towards Actionable, Interactive Explanations

PROCEEDINGS OF THE 2018 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY (AIES'18)(2018)

Abstract
Adoption of AI systems in high-stakes domains (e.g., transportation, law, and healthcare) demands that human users trust these systems. A desideratum for establishing trust is that users understand the system's decision process. However, a high-performing system may use a complex decision process that is not inherently interpretable. We argue that existing solutions for generating interpretable explanations have limitations and, as a remedy, propose developing new explanation systems that enable interactive and actionable dialogs between the user and the system.
Keywords
Explainable AI