Disentangled Representation

When working with complex data, combining high-performing neural networks with hand-crafted modelling methods can, to some extent, resolve the problems of interpretability, object generation and manipulation, unsupervised feature learning, and zero-shot learning. Differential equations and other hand-crafted models struggle with image processing on their own, but combined with deep learning they allow us to generate and manipulate objects, remain highly interpretable, and, most importantly, perform the same tasks on other datasets. Yet even when a model's features are interpretable, they are correlated with one another; in other words, the features are entangled. This is where disentangled representations matter: each element of the embedding should correspond to a single factor of variation, so that the embedding can be used for classification, generation, and zero-shot learning.
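Several of the papers below build on the β-VAE idea of up-weighting the KL regulariser so that each latent dimension is pressured toward an independent prior, which empirically encourages factorised codes. A minimal sketch of that penalised term (plain NumPy; the function name and the default β are illustrative, not taken from any specific paper below):

```python
import numpy as np

def beta_vae_kl(mu, logvar, beta=4.0):
    """beta-scaled KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior.

    Scaling the KL term by beta > 1 (as in beta-VAE) pushes every latent
    dimension toward the isotropic prior, trading reconstruction quality
    for more factorised, disentangled codes.
    """
    # Closed-form KL between N(mu, exp(logvar)) and N(0, 1), per dimension.
    kl_per_dim = 0.5 * (np.exp(logvar) + mu ** 2 - 1.0 - logvar)
    return beta * kl_per_dim.sum(axis=-1)

# A posterior that already matches the prior pays no penalty:
mu = np.zeros((1, 10))
logvar = np.zeros((1, 10))
print(beta_vae_kl(mu, logvar))  # -> [0.]
```

The same closed-form KL is the term that the β-TCVAE entry further decomposes into mutual-information, total-correlation, and dimension-wise pieces.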
RML@ICLR, (2019)
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. In this paper, we provide a sober look at recent progress in ...
Cited by 169
pp.2525-2534 (2019)
Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard ob...
Cited by 67
Sjoerd van Steenkiste, Francesco Locatello, Jürgen Schmidhuber, Olivier Bachem
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), (2019): 14222-14235
In this work we investigated whether disentangled representations allow one to learn good models for non-trivial downstream tasks with fewer samples
Cited by 55
THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF AR..., (2019): 3175-3182
We address the problem of unsupervised disentanglement of latent representations learnt via deep generative models. In contrast to current approaches that operate on the evidence lower bound (ELBO), we argue that statistical independence in the latent space of VAEs can be enforce...
Cited by 8
Spyros Gidaris, Praveer Singh, Nikos Komodakis
ICLR, (2018)
In our work we propose to learn image features by training ConvNets to recognize the 2d rotation that is applied to the image that it gets as input
Cited by 491
R. Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, Yoshua Bengio
arXiv: Machine Learning, (2018)
Our results show that infoNCE tends to perform best, but differences between infoNCE and Jensen-Shannon divergence diminish with larger datasets
Cited by 400
Hyunjik Kim, Andriy Mnih
ICML, (2018)
We have introduced FactorVAE, a novel method for disentangling that achieves better disentanglement scores than β-Variational Autoencoder on the 2D Shapes and 3D Shapes data sets for the same reconstruction quality
Cited by 274
NeurIPS, (2018): 2610-2620
We designate a special case as β-TCVAE, which can be trained stochastically using a minibatch estimator, with no additional hyperparameters compared to the β-variational autoencoder
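For reference, β-TCVAE rests on a decomposition of the averaged KL term (here n indexes training examples, q(z) is the aggregate posterior, and z_j the j-th latent dimension):

```latex
\mathbb{E}_{p(n)}\big[\mathrm{KL}\big(q(z \mid n)\,\|\,p(z)\big)\big]
  = \underbrace{I_q(z; n)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\textstyle\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```

The extra weight β is applied only to the total-correlation term, which is why the method needs no hyperparameters beyond those of β-VAE.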
Cited by 260
Alexander A. Alemi, Ben Poole, Ian Fischer, Joshua V. Dillon, Rif A. Saurous, Kevin Murphy
ICML, pp.159-168, (2018)
We examine several variational autoencoder model architectures that have been proposed in the literature
Cited by 240
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), (2018): 1287-1298
We investigate and impose structure to the specific representation learned in image-to-image translation models
Cited by 123
Jun-Ting Hsieh, Bingbin Liu, De-An Huang, Li Fei-Fei, Juan Carlos Niebles
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), (2018): 517-526
With an appropriately specified structural model, Disentangled Predictive Auto-Encoder is able to learn both the video decomposition and disentanglement that are effective for video prediction without any explicit supervision on these latent variables
Cited by 118
Irina Higgins, Nicolas Sonnerat, Loic Matthey, Arka Pal, Christopher P. Burgess, Matko Bosnjak, Murray Shanahan, Matthew Botvinick, Demis Hassabis, Alexander Lerchner
International Conference on Learning Representations, (2018)
The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and rep...
Cited by 76
ICML, pp.5656-5665, (2018)
We presented a minimalistic generative model for learning disentangled representations of high-dimensional time series
Cited by 74
Adam R. Kosiorek, Hyunjik Kim, Ingmar Posner, Yee Whye Teh
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), (2018)
We present Sequential Attend, Infer, Repeat, an interpretable deep generative model for videos of moving objects
Cited by 70
Emilien Dupont
Neural Information Processing Systems, pp.708-718, (2018)
We have proposed JointVAE, a framework for learning disentangled continuous and discrete representations in an unsupervised manner
Cited by 52
Alessandro Achille, Tom Eccles, Loïc Matthey, Christopher P. Burgess, Nick Watters, Alexander Lerchner, Irina Higgins
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), (2018): 9873-9883
We have demonstrated that Variational Autoencoder with Shared Embeddings can learn a disentangled representation of a sequence of datasets
Cited by 47
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), (2018): 118-129
We have presented visual object networks, a fully differentiable 3D-aware generative model for image and shape synthesis
Cited by 45
International Conference on Learning Representations, (2018)
Batch normalization was used for all convolutional networks
Cited by 32
International Conference on Artificial Intelligence and Statistics, (2018)
We find that under standard assumptions, the lower bound for Correlation Explanation shares the same mathematical form as the evidence lower bound used in variational autoencoders, suggesting that CorEx provides a dual information-theoretic perspective on representations learne...
Cited by 30
Giambattista Parascandolo, Mateo Rojas-Carulla, Niki Kilbertus, Bernhard Schölkopf
international conference on machine learning, (2018)
We reported promising results in an experiment based on image transformations; future work could study more complex settings and diverse domains
Cited by 29