Learning deep representations by mutual information estimation and maximization
International Conference on Learning Representations (2019)
Abstract
In this work, we perform unsupervised learning of representations by
maximizing mutual information between an input and the output of a deep neural
network encoder. Importantly, we show that structure matters: incorporating
knowledge about locality of the input to the objective can greatly influence a
representation's suitability for downstream tasks. We further control
characteristics of the representation by matching to a prior distribution
adversarially. Our method, which we call Deep InfoMax (DIM), outperforms a
number of popular unsupervised learning methods and competes with
fully-supervised learning on several classification tasks. DIM opens new
avenues for unsupervised learning of representations and is an important step
towards flexible formulations of representation-learning objectives for
specific end-goals.
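The abstract itself contains no code; as a rough illustration of the core idea, here is a minimal sketch of the global mutual-information objective, assuming PyTorch. The Encoder and Discriminator architectures, layer sizes, and the batch-roll negative sampling are illustrative assumptions rather than the paper's exact setup, and the sketch omits the local-MI term and the adversarial prior matching that the abstract highlights.

```python
# Hypothetical minimal sketch (PyTorch assumed) of a global-MI objective:
# an encoder maps inputs to codes, and a small discriminator T(x, y) scores
# matched ("positive") against mismatched ("negative") pairs, yielding a
# Jensen-Shannon lower bound on mutual information that we maximize.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Illustrative MLP encoder; the paper uses convolutional feature maps.
    def __init__(self, in_dim=784, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    # Scores (input, code) pairs: high for matched, low for mismatched.
    def __init__(self, in_dim=784, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + code_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1)).squeeze(1)

def mi_jsd_loss(T, x, y):
    """Negative Jensen-Shannon MI estimate between inputs x and codes y.

    Positive pairs: (x_i, y_i). Negative pairs: each x_i paired with a
    code from another sample in the batch (rolled by one position).
    """
    y_shuffled = torch.roll(y, shifts=1, dims=0)
    pos = -F.softplus(-T(x, y)).mean()         # E_P[-softplus(-T)]
    neg = F.softplus(T(x, y_shuffled)).mean()  # E_N[softplus(T)]
    return -(pos - neg)  # minimizing this maximizes the MI lower bound

# Usage: one optimization step on a random batch.
enc, T = Encoder(), Discriminator()
opt = torch.optim.Adam(list(enc.parameters()) + list(T.parameters()), lr=1e-4)
x = torch.randn(32, 784)
loss = mi_jsd_loss(T, x, enc(x))
opt.zero_grad(); loss.backward(); opt.step()
```

The key design point the abstract makes is that where this estimator is applied matters: scoring codes against local patches of the input, rather than the whole input as above, is what makes the representations useful for downstream tasks.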