Convergence of Continuous Normalizing Flows for Learning Probability Distributions
arXiv (2024)
Abstract
Continuous normalizing flows (CNFs) are a generative method for learning
probability distributions that is based on ordinary differential equations.
This method has shown remarkable empirical success across various applications,
including large-scale image synthesis, protein structure prediction, and
molecule generation. In this work, we study the theoretical properties of CNFs
with linear interpolation in learning probability distributions from a finite
random sample, using a flow matching objective function. We establish
non-asymptotic error bounds for the distribution estimator based on CNFs, in
terms of the Wasserstein-2 distance. The key assumption in our analysis is that
the target distribution satisfies one of three conditions: it has bounded
support, is strongly log-concave, or is a finite or infinite mixture of
Gaussian distributions. We present a convergence analysis framework that
encompasses the velocity estimation error, the discretization error, and the
early stopping error. A key step in our analysis
involves establishing the regularity properties of the velocity field and its
estimator for CNFs constructed with linear interpolation. This necessitates the
development of uniform error bounds with Lipschitz regularity control of deep
ReLU networks that approximate the Lipschitz function class, which could be of
independent interest. Our nonparametric convergence analysis offers theoretical
guarantees for using CNFs to learn probability distributions from a finite
random sample.
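
For context, here is a minimal sketch of the flow matching setup with linear interpolation that the abstract refers to, written in standard notation; the symbols $X_0$, $X_1$, and $v_\theta$ are illustrative and not taken from the paper. Given a reference draw $X_0 \sim N(0, I_d)$ and a target draw $X_1$, the linear interpolation path is

$$ X_t = (1 - t)\,X_0 + t\,X_1, \qquad t \in [0, 1], $$

and the flow matching objective fits a network $v_\theta$ to the velocity field by minimizing

$$ \mathbb{E}_{t,\,X_0,\,X_1}\,\big\| v_\theta(X_t, t) - (X_1 - X_0) \big\|^2 . $$

Sampling then integrates the learned ODE $\mathrm{d}Z_t/\mathrm{d}t = v_\theta(Z_t, t)$ from $Z_0 \sim N(0, I_d)$. In this setup, the early stopping error named above typically arises from halting the integration slightly before $t = 1$, and the discretization error from solving the ODE numerically.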