How to Explain Individual Classification Decisions

Journal of Machine Learning Research (2010)

Abstract
After building a classifier with modern machine-learning tools, we typically have a black box at hand that predicts well for unseen data. We thus obtain an answer to the question of what the most likely label of a given unseen data point is. However, most methods provide no answer to why the model predicted a particular label for a single instance, or which features were most influential for that particular instance. The only method currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows the decisions of any classification method to be explained.
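One common way to realize such instance-level explanations for a probabilistic black-box classifier is to look at how the predicted class probability changes as each feature is perturbed around the instance in question. The sketch below (an illustration under assumed tooling, not the paper's exact procedure) approximates such a local explanation vector by central finite differences of `predict_proba`, using scikit-learn and a toy dataset; the function name `local_explanation` is hypothetical:

```python
# Hedged sketch: approximate a local "explanation vector" for one prediction
# as the finite-difference gradient of the predicted class probability with
# respect to each input feature. Any classifier exposing predict_proba works.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

def local_explanation(clf, x, eps=1e-4):
    """Gradient of P(class 1 | x) w.r.t. each feature, via central differences."""
    grad = np.zeros_like(x, dtype=float)
    for j in range(len(x)):
        x_hi, x_lo = x.copy(), x.copy()
        x_hi[j] += eps
        x_lo[j] -= eps
        p_hi = clf.predict_proba(x_hi[None, :])[0, 1]
        p_lo = clf.predict_proba(x_lo[None, :])[0, 1]
        grad[j] = (p_hi - p_lo) / (2 * eps)
    return grad

expl = local_explanation(clf, X[0].astype(float))
print(expl)  # larger |entry| => feature more influential for this instance
```

For this linear model the gradient of the probability is `p(1-p)` times the coefficient vector, so the finite-difference estimate can be sanity-checked against `clf.coef_`; the same perturbation scheme applies unchanged to nonlinear black boxes such as kernel methods.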
Keywords
kernel methods,classification method,black box model,modern tool,explaining,single instance,ames mutagenicity,particular instance,particular label,unseen data point,explain individual classification decisions,decision tree,unseen data,black box,likely label,nonlinear