Motivating explanations in Bayesian networks using MAP-independence

International Journal of Approximate Reasoning (2023)

Abstract
In decision support systems, the motivation and justification of the system's diagnosis or classification is crucial for the acceptance of the system by the human user. In Bayesian networks, a diagnosis or classification is typically formalized as the computation of the most probable joint value assignment to the hypothesis variables, given the observed values of the evidence variables (generally known as the MAP problem). While solving the MAP problem gives the most probable explanation of the evidence, the computation is a black box as far as the human user is concerned, and it does not give additional insights that allow the user to appreciate and accept the decision. For example, a user might want to know whether an unobserved variable could potentially (upon observation) impact the explanation, or whether it is irrelevant in this respect. In this paper we introduce a new concept, MAP-independence, which tries to capture this notion of relevance, and explore its role towards a potential justification of an inference to the best explanation. We formalize several computational problems based on this concept and assess their computational complexity.
Keywords
Bayesian networks, Most probable explanations, Relevance, Explainable AI, Computational complexity