Overdispersed Variational Autoencoders

2017 International Joint Conference on Neural Networks (IJCNN)

Cited by 1 | Views: 40
Abstract
The ability to fit complex generative probabilistic models to data is a key challenge in AI. Currently, variational methods are popular, but remain difficult to train due to high variance of the sampling methods employed. We introduce the overdispersed variational autoencoder and overdispersed importance weighted autoencoder, which combine overdispersed black box variational inference with the variational autoencoder and importance weighted autoencoder respectively. We use the log likelihood lower bounds and reparametrisation trick from the variational and importance weighted autoencoders, but rather than drawing samples from the variational distribution itself, we use importance sampling to draw samples from an overdispersed (i.e. heavier-tailed) proposal in the same family as the variational distribution. We run experiments on two different datasets, and show that this technique produces a lower variance estimate of the gradients, and reaches a higher bound on the log likelihood of the observed data.
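As a rough illustration of the estimator the abstract describes, below is a minimal NumPy sketch of an importance-sampled ELBO where samples come from an overdispersed Gaussian proposal in the same family as the variational distribution. Assumptions not taken from the paper: a diagonal Gaussian posterior q(z|x) = N(mu, sigma^2), a proposal obtained by inflating the variance by a factor tau > 1, and a user-supplied `log_joint(x, z)` returning log p(x, z) per sample; the function names, shapes, and the value of tau are illustrative only.

```python
import numpy as np

def log_normal(z, mu, var):
    """Log density of a diagonal Gaussian N(mu, var), summed over dimensions."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (z - mu) ** 2 / var, axis=-1)

def od_elbo_estimate(x, mu, sigma, log_joint, tau=1.5, n_samples=16, rng=None):
    """Estimate the ELBO E_q[log p(x,z) - log q(z|x)] with importance
    samples drawn from the heavier-tailed proposal q_tau = N(mu, tau*sigma^2).

    Hypothetical interface: `log_joint(x, z)` must return an array of
    shape (n_samples,) with log p(x, z_s) for each sampled z_s.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((n_samples, mu.shape[-1]))
    z = mu + np.sqrt(tau) * sigma * eps            # reparameterised draw from q_tau
    log_q = log_normal(z, mu, sigma ** 2)          # log q(z|x)
    log_q_tau = log_normal(z, mu, tau * sigma ** 2)  # log q_tau(z|x)
    w = np.exp(log_q - log_q_tau)                  # importance weights q / q_tau
    f = log_joint(x, z) - log_q                    # ELBO integrand
    return np.mean(w * f)                          # unbiased ELBO estimate
```

Because tau > 1 makes the proposal's tails heavier than q's, the weight q(z)/q_tau(z) decays as |z - mu| grows, so the weights stay bounded; this is the mechanism behind the variance reduction the abstract claims. Setting tau = 1 recovers the standard reparameterised VAE estimator.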
Keywords
overdispersed variational autoencoders,complex generative probabilistic models,AI,sampling methods,overdispersed importance weighted autoencoder,overdispersed black box variational inference,importance weighted autoencoder,log likelihood lower bounds,reparametrisation trick,variational distribution