Unpacking Information Bottlenecks: Unifying Information-Theoretic Objectives in Deep Learning
arXiv (2020)
Abstract
The information bottleneck (IB) principle offers both a mechanism to explain how deep neural networks train and generalize, and a regularized objective with which to train models. However, multiple competing objectives have been proposed based on this principle. Moreover, the information-theoretic quantities in these objectives are difficult to compute for large deep neural networks, which limits their use as training objectives. In this work, we review these quantities, compare and unify the previously proposed objectives, and relate them to surrogate objectives that are more amenable to optimization. We find that these surrogate objectives allow us to apply the information bottleneck to modern neural network architectures. We demonstrate our insights on Permutation-MNIST, MNIST, and CIFAR-10.
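For context, the IB objective discussed in the abstract is typically written as minimizing I(X;Z) − β·I(Z;Y) over a stochastic encoding Z of the input X. A widely used surrogate (the variational IB of Alemi et al., 2017) replaces the intractable mutual-information terms with a cross-entropy term plus a KL penalty against a standard-normal prior. The sketch below is illustrative only, assuming a diagonal-Gaussian encoder; the function name and shapes are hypothetical, not from the paper:

```python
import numpy as np

def vib_loss(mu, logvar, logits, labels, beta=1e-3):
    """Variational IB surrogate: cross-entropy + beta * KL(q(z|x) || N(0, I)).

    mu, logvar: (batch, d) parameters of the diagonal-Gaussian encoder q(z|x)
    logits:     (batch, k) classifier outputs computed from a sample of z
    labels:     (batch,) integer class labels
    beta:       trade-off weight on the compression term
    """
    # Closed-form KL(N(mu, diag(exp(logvar))) || N(0, I)), averaged over the batch;
    # this upper-bounds the I(X;Z) compression term of the IB objective.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1).mean()
    # Numerically stabilized softmax cross-entropy, a bound related to -I(Z;Y).
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    return ce + beta * kl
```

With a standard-normal encoder (mu = 0, logvar = 0) the KL term vanishes, and with uniform logits over k classes the cross-entropy reduces to log(k), which gives a quick sanity check on the implementation.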
Keywords
information bottlenecks, deep learning, information-theoretic