Sharing Deep Neural Network Models with Interpretation.

WWW '18: The Web Conference 2018, Lyon, France, April 2018

Citations 22 | Views 73
Abstract
Despite outperforming humans in many tasks, deep neural network models are also criticized for their lack of transparency and interpretability in decision making. This opaqueness results in uncertainty and low confidence when deploying such a model in model sharing scenarios, where the model is developed by a third party. For a supervised machine learning model, sharing the training process, including the training data, is a way to gain trust and to better understand model predictions. However, it is not always possible to share all training data due to privacy and policy constraints. In this paper, we propose a method to disclose a small set of training data that is just sufficient for users to gain insight into a complicated model. The method constructs a boundary tree from selected training data, and the tree is able to approximate the complicated deep neural network model with high fidelity. We show that data point pairs in the tree give users a significantly better understanding of the model's decision boundaries and pave the way for trustworthy model sharing.
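To make the approach sketched in the abstract concrete, the Python sketch below illustrates the general boundary-tree idea (in the spirit of Mathy et al.'s boundary tree/forest algorithm): a training point is retained only if the tree built so far mislabels it, so the kept points cluster near the model's decision boundaries. The class name, the Euclidean metric, and the toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class BoundaryTree:
    """Minimal boundary-tree sketch; names and metric are assumptions."""

    def __init__(self, x, y):
        self.x = np.asarray(x, dtype=float)  # stored training point
        self.y = y                           # its label (e.g. the shared DNN's prediction)
        self.children = []

    def _descend(self, q):
        # Greedy traversal: repeatedly move to the child closest to q,
        # stopping when the current node is at least as close as any child.
        node = self
        while True:
            best = min(node.children + [node],
                       key=lambda n: np.linalg.norm(n.x - q))
            if best is node:
                return node
            node = best

    def train(self, x, y):
        # Keep (x, y) only if the tree currently mislabels it; retained
        # points therefore lie near the tree's decision boundaries.
        x = np.asarray(x, dtype=float)
        node = self._descend(x)
        if node.y != y:
            node.children.append(BoundaryTree(x, y))

    def predict(self, x):
        return self._descend(np.asarray(x, dtype=float)).y

# Toy usage: the labels stand in for a shared model's predictions.
pts = [[0.0, 0.0], [1.0, 1.0], [0.9, 0.2], [0.1, 0.8]]
labels = [0, 1, 1, 0]
tree = BoundaryTree(pts[0], labels[0])
for p, l in zip(pts[1:], labels[1:]):
    tree.train(p, l)
print(tree.predict([0.8, 0.3]))  # -> 1
```

Because each retained node is added as the child of the node it was confused with, parent-child pairs straddle a decision boundary, which is what lets users inspect where the approximated model changes its prediction.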
Keywords
Deep Neural Networks, Model Sharing, Interpretability, Decision Boundary