PatternNet and PatternLRP - Improving the interpretability of neural networks.
arXiv: Machine Learning (2017)
Abstract
Deep learning has significantly advanced the state of the art in machine learning. However, neural networks are often considered black boxes. There is significant effort to develop techniques that explain a classifier's decisions. Although some of these approaches have produced compelling visualisations, there is little theory of what is actually being explained. Here we present an analysis of these methods and formulate a quality criterion for explanation methods. On this basis, we propose an improved method that may serve as an extension of existing back-projection and decomposition techniques.
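The back-projection techniques the abstract refers to map a classifier's output back onto its input to produce a saliency map. As a minimal illustrative sketch (not the paper's PatternNet/PatternLRP method), the simplest such explanation for a linear model y = w·x is the input gradient, which is just the weight vector; an input-times-gradient variant weights it by the input. All names and data below are hypothetical:

```python
# Minimal sketch of gradient-based back-projection for a tiny linear
# "network" y = w @ x. Illustrative only; the paper's PatternNet and
# PatternLRP are refinements of this family of methods.
import numpy as np

def explain_by_gradient(w, x):
    """Back-project the output onto the input via the gradient.

    For y = w @ x, the gradient dy/dx equals w, so the saliency map
    is the weight vector itself, independent of the particular input.
    """
    return w

w = np.array([0.5, -2.0, 0.0])
x = np.array([1.0, 2.0, 3.0])

saliency = explain_by_gradient(w, x)   # gradient explanation: w
attribution = saliency * x             # input-times-gradient variant
print(saliency)      # [ 0.5 -2.   0. ]
print(attribution)   # [ 0.5 -4.   0. ]
```

The paper's criticism, roughly, is that such gradient maps mix the informative signal with uninformative distractor directions; its proposed estimators aim to separate the two.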