The Riemannian Geometry of Deep Generative Models
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Abstract
Deep generative models learn a mapping from a low-dimensional latent space to a high-dimensional data space. Under certain regularity conditions, these models parameterize nonlinear manifolds in the data space. In this paper, we investigate the Riemannian geometry of these generated manifolds. First, we develop efficient algorithms for computing geodesic curves, which provide an intrinsic notion of distance between points on the manifold. Second, we develop an algorithm for parallel translation of a tangent vector along a path on the manifold. We show how parallel translation can be used to generate analogies, i.e., to transport a change in one data point into a semantically similar change of another data point. Our experiments on real image data show that the manifolds learned by deep generative models, while nonlinear, are surprisingly close to zero curvature. The practical implication is that linear paths in the latent space closely approximate geodesics on the generated manifold. However, further investigation into this phenomenon is warranted, to determine whether there are other architectures or datasets where curvature plays a more prominent role. We believe that exploring the Riemannian geometry of deep generative models, using the tools developed in this paper, will be an important step in understanding the high-dimensional, nonlinear spaces these models learn.
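The comparison between straight latent paths and true geodesics can be sketched numerically. The following is a minimal illustration only, not the paper's algorithm: it replaces the trained generative model with a hand-written decoder onto a paraboloid in R^3 (a deliberately curved surface), computes a discrete geodesic by gradient descent on the path energy, and compares its length to the image of a straight line in latent space. All function names here are invented for the sketch.

```python
import numpy as np

# Toy "decoder": maps 2-D latent codes onto a paraboloid in R^3.
# A trained generator network would play this role in the paper's setting.
def g(z):
    return np.array([z[0], z[1], z[0]**2 + z[1]**2])

# Jacobian of the decoder; its pullback J^T J is the induced Riemannian metric.
def jacobian(z):
    return np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [2 * z[0], 2 * z[1]]])

# Length of the decoded curve, approximated by summing chord lengths.
def curve_length(zs):
    pts = np.array([g(z) for z in zs])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

# Discrete geodesic: minimize the path energy sum_i ||g(z_{i+1}) - g(z_i)||^2
# over the interior latent points, keeping the endpoints fixed.
def geodesic(z_start, z_end, n=20, steps=2000, lr=0.05):
    zs = np.linspace(z_start, z_end, n)
    for _ in range(steps):
        for i in range(1, n - 1):
            grad = jacobian(zs[i]).T @ (2 * g(zs[i]) - g(zs[i - 1]) - g(zs[i + 1]))
            zs[i] = zs[i] - lr * grad
    return zs

z0, z1 = np.array([-1.0, 0.0]), np.array([0.0, 1.0])
straight = np.linspace(z0, z1, 20)   # straight line in latent space
geo = geodesic(z0, z1)               # energy-minimizing path
print(curve_length(straight), curve_length(geo))
```

On this deliberately curved toy surface the geodesic comes out measurably shorter than the decoded straight latent line; the paper's empirical finding is that for real deep generative models the gap is small, i.e., curvature is close to zero.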
Keywords
Riemannian geometry, deep generative models, low-dimensional latent space, high-dimensional data space, nonlinear manifolds, parallel translation, geodesic curves, tangent vector, real image data