DAVA: Disentangling Adversarial Variational Autoencoder

ICLR 2023

Abstract
The use of well-disentangled representations offers many advantages for downstream tasks, e.g. increasing sample efficiency or enabling interpretability. Their quality is, however, determined to a large extent by the choice of dataset-specific hyperparameters, most notably the regularization strength. To address this, we introduce DAVA, a novel training procedure for variational auto-encoders that alleviates the problem of hyperparameter selection at the cost of a comparatively small overhead. We compare DAVA against models with an optimal choice of hyperparameters. Without any hyperparameter tuning, DAVA is competitive across a diverse range of commonly used datasets. Further, even under an adequate set of hyperparameters, the success of the disentanglement process remains heavily influenced by randomness in network initialization. We therefore present the new unsupervised PIPE disentanglement metric, capable of evaluating representation quality. We demonstrate the PIPE metric's ability to positively predict the performance of downstream models in abstract reasoning. We also exhaustively examine correlations with existing supervised and unsupervised metrics.
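For context, the "regularization strength" the abstract refers to is typically the weight on the KL term in the VAE objective, as in the β-VAE family. The sketch below is not the authors' DAVA procedure; it is a minimal, generic β-VAE loss (in PyTorch) that illustrates the dataset-specific hyperparameter `beta` whose manual tuning DAVA aims to avoid. The Bernoulli decoder and the value of `beta` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Generic beta-VAE objective: reconstruction + beta-weighted KL term.

    `beta` is the dataset-specific regularization strength whose manual
    tuning DAVA is designed to sidestep; the value 4.0 is illustrative.
    """
    batch_size = x.size(0)
    # Reconstruction term (Bernoulli decoder assumed), averaged over the batch.
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum") / batch_size
    # KL divergence between q(z|x) = N(mu, diag(exp(logvar))) and the prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / batch_size
    return recon + beta * kl
```

With `beta = 1.0` this reduces to the standard VAE evidence lower bound; larger values trade reconstruction fidelity for a more factorized latent code, which is why the best setting varies per dataset.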
Keywords
Disentanglement learning, variational auto-encoder, curriculum learning, generative adversarial networks