NIF: A Framework for Quantifying Neural Information Flow in Deep Networks.
arXiv: Learning (2019)
Abstract
In this paper, we present a new approach to interpreting deep learning models. More precisely, by coupling mutual information with network science, we explore how information flows through feedforward networks. We show that efficiently approximating mutual information via the dual representation of the Kullback-Leibler divergence allows us to create an information measure that quantifies how much information flows between any two neurons of a deep learning model. To that end, we propose NIF, Neural Information Flow, a new metric for codifying information flow that exposes the internals of a deep learning model while providing feature attributions.
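The dual representation mentioned in the abstract is the Donsker-Varadhan bound, I(X;Y) ≥ E_p(x,y)[T(x,y)] − log E_p(x)p(y)[exp(T(x,y))], which holds for any critic function T. The sketch below is illustrative only and is not the paper's implementation: it uses a toy scalar critic T(x, y) = a·x·y tuned by grid search on correlated Gaussian data (where the true mutual information is known in closed form), instead of the neural critic an actual estimator would train.

```python
import numpy as np

# Donsker-Varadhan (DV) lower bound on mutual information, the dual
# representation of KL divergence referenced in the abstract:
#   I(X;Y) >= E_{p(x,y)}[T(x,y)] - log E_{p(x)p(y)}[exp(T(x,y))]
# Assumption: a toy critic T(x, y) = a * x * y with a scalar parameter a,
# optimized by grid search -- a stand-in for a trained neural critic.

rng = np.random.default_rng(0)
n, rho = 50_000, 0.8

# Jointly Gaussian (X, Y) with correlation rho; true MI is known in closed form.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
y_shuffled = rng.permutation(y)  # approximate samples from the product of marginals

def dv_bound(a):
    """Empirical DV lower bound for the critic T(x, y) = a * x * y."""
    joint_term = np.mean(a * x * y)
    marginal_term = np.log(np.mean(np.exp(a * x * y_shuffled)))
    return joint_term - marginal_term

# Grid search over a; restricted to a < 1 so the marginal expectation exists.
best = max(dv_bound(a) for a in np.linspace(0.0, 0.95, 96))
true_mi = -0.5 * np.log(1 - rho**2)  # closed form for bivariate Gaussians
print(f"DV lower bound: {best:.3f}  true MI: {true_mi:.3f}")
```

Because the critic family here is deliberately simple, the bound sits well below the true mutual information; a trained neural critic, as in MINE-style estimators, tightens it.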