Inverting Variational Autoencoders for Improved Generative Accuracy

arXiv: Learning(2016)

Abstract
Recent advances in semi-supervised learning with deep generative models have shown promise in generalizing from small labeled datasets $(\mathbf{x}, \mathbf{y})$ to large unlabeled ones ($\mathbf{x}$). In the case where the codomain has known structure, a large unfeatured dataset ($\mathbf{y}$) is potentially available. We develop a parameter-efficient, deep semi-supervised generative model for the purpose of exploiting this untapped data source. Empirical results show improved performance in disentangling latent variable semantics as well as improved discriminative prediction on Martian spectroscopic and handwritten digit domains.
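The abstract does not give the model equations, but the variational-autoencoder machinery such semi-supervised generative models build on can be sketched. Below is a minimal, hypothetical NumPy illustration (not the paper's architecture) of two core VAE components: the reparameterization trick for sampling the latent $\mathbf{z}$, and the closed-form KL divergence of a diagonal Gaussian posterior from a standard-normal prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Draw z = mu + sigma * eps with eps ~ N(0, I); in an autodiff
    # framework this keeps the sample differentiable w.r.t. (mu, log_var).
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    # This is the regularization term of the ELBO.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Toy check: the KL term vanishes exactly when the posterior equals the prior.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))
print(kl_to_standard_normal(mu, log_var))  # -> [0. 0. 0. 0.]
```

In a semi-supervised setting of the kind the abstract describes, this KL term would appear in the objective for labeled, unlabeled, and (here) unfeatured batches alike, with the reconstruction term adapted to whichever of $\mathbf{x}$ or $\mathbf{y}$ is observed.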