Learning Decision Trees Recurrently Through Communication

2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021)

Abstract
Integrated interpretability, achieved without sacrificing the prediction accuracy of decision-making algorithms, has the potential to greatly improve their value to the user. Instead of assigning a label to an image directly, we propose to learn iterative binary sub-decisions, inducing sparsity and transparency in the decision-making process. The key aspect of our model is its ability to build a decision tree whose structure is encoded into the memory representation of a Recurrent Neural Network, jointly learned by two models communicating through message passing. In addition, our model assigns a semantic meaning to each decision in the form of binary attributes, providing concise, semantic and relevant rationalizations to the user. On three benchmark image classification datasets, including the large-scale ImageNet, our model generates human-interpretable binary decision sequences explaining the predictions of the network while maintaining state-of-the-art accuracy.
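To make the "iterative binary sub-decisions" idea concrete, the following is a minimal toy sketch (not the paper's implementation): a recurrent cell updates a hidden memory at each step and emits one yes/no decision, and the resulting bit sequence indexes a leaf of an implicit binary tree that maps to a class. All weights, dimensions, and the decision head here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3   # tree depth = number of binary sub-decisions per input (assumed)
H = 8   # size of the recurrent "memory" state (assumed)
F = 4   # input feature size, standing in for image features (assumed)

W_h = rng.normal(size=(H, H)) * 0.1   # recurrent weights (random, illustrative)
W_x = rng.normal(size=(H, F)) * 0.1   # input-to-hidden weights
w_d = rng.normal(size=(H,))           # linear head producing one binary decision

def decide(x):
    """Return the sequence of binary sub-decisions and the resulting leaf index."""
    h = np.zeros(H)
    bits = []
    for _ in range(D):
        h = np.tanh(W_h @ h + W_x @ x)   # update the recurrent memory
        bit = int((w_d @ h) > 0)         # one interpretable yes/no sub-decision
        bits.append(bit)
    leaf = int("".join(map(str, bits)), 2)   # the bit path selects a tree leaf
    return bits, leaf

bits, leaf = decide(rng.normal(size=F))
print(bits, leaf)
```

In the paper the tree structure and the semantics of each decision (binary attributes) are learned jointly by two communicating models; this sketch only illustrates how a fixed-length bit sequence produced recurrently traverses an implicit binary tree.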
Keywords
decision trees, integrated interpretability, prediction accuracy, decision-making algorithms, iterative binary sub-decisions, inducing sparsity, transparency, decision-making process, decision tree structure, memory representation, Recurrent Neural Network, message passing, model assigns, semantic meaning, binary attributes, relevant rationalizations, benchmark image classification datasets, human-interpretable binary decision sequences