
A Statistical Analysis of Wasserstein Autoencoders for Intrinsically Low-dimensional Data

ICLR (2024)

Abstract
Variational Autoencoders (VAEs) have gained significant popularity among researchers as a powerful tool for understanding unknown distributions based on limited samples. This popularity stems partly from their impressive performance and partly from their ability to provide meaningful feature representations in the latent space. Wasserstein Autoencoders (WAEs), a variant of VAEs, aim to improve not only model efficiency but also interpretability. However, there has been limited focus on analyzing their statistical guarantees. The matter is further complicated by the fact that the data distributions to which WAEs are applied - such as natural images - are often presumed to possess an underlying low-dimensional structure within a high-dimensional feature space, which current theory does not adequately account for, rendering known bounds inefficient. To bridge the gap between the theory and practice of WAEs, in this paper we show that WAEs can learn the data distribution when the network architectures are properly chosen. We show that the convergence rates of the expected excess risk in the number of samples for WAEs are independent of the high feature dimension and depend only on the intrinsic dimension of the data distribution.
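For context, the abstract does not restate the objective whose excess risk is being analyzed. The following is a sketch of the standard WAE objective from Tolstikhin et al. (2018), which the paper's notation may differ from; the symbols here (cost c, decoder G, encoder Q(Z|X), prior P_Z, aggregated posterior Q_Z, penalty weight lambda) are standard conventions, not taken from this page.

% Standard WAE objective (Tolstikhin et al., 2018) -- a reference sketch,
% not necessarily the notation used in the paper summarized above.
\[
  \mathcal{L}_{\mathrm{WAE}}(G, Q)
  \;=\;
  \mathbb{E}_{X \sim P_X}\,
  \mathbb{E}_{Z \sim Q(Z \mid X)}
  \bigl[\, c\bigl(X, G(Z)\bigr) \bigr]
  \;+\;
  \lambda \, \mathcal{D}_Z\bigl(Q_Z, P_Z\bigr)
\]
% where c is a reconstruction cost (e.g., squared Euclidean distance),
% Q_Z = E_{X \sim P_X}[Q(Z | X)] is the aggregated posterior, P_Z is the
% latent prior, and D_Z is a latent-space divergence such as an MMD or
% adversarial (GAN-based) penalty.

The excess risk referenced in the abstract is then, in the usual sense, the gap between the objective value attained by the estimated encoder-decoder pair and the optimum over the chosen network class.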
Key words
Wasserstein Autoencoders, Statistical Analysis, Error rates, Intrinsic Dimension