Weakly Supervised Disentanglement with Guarantees

Abhishek Kumar
Ben Poole

ICLR, 2020.


Abstract:

Learning disentangled representations that correspond to factors of variation in real-world data is critical to interpretable and human-controllable machine learning. Recently, concerns about the viability of learning disentangled representations in a purely unsupervised manner have spurred a shift toward the incorporation of weak supervision. However, there is no existing formalism for describing the theoretical guarantees conferred by different forms of weak supervision. We provide a theoretical framework to assist in analyzing the disentanglement guarantees conferred by weak supervision when coupled with learning algorithms based on distribution matching, and we empirically verify the guarantees and limitations of several weak supervision methods (restricted labeling, match-pairing, and rank-pairing), demonstrating the predictive power and usefulness of our framework.
Introduction
  • Many real-world datasets can be intuitively described via a data-generating process that first samples an underlying set of interpretable factors and then, conditional on those factors, generates an observed data point (a minimal sketch of such a process appears after this list).
  • The goal of disentangled representation learning is to learn a representation in which each dimension corresponds to a distinct factor of variation in the dataset (Bengio et al., 2013).
  • Learning such representations that align with the underlying factors of variation may be critical to the development of machine learning models that are explainable or human-controllable (Gilpin et al., 2018; Lee et al., 2019; Klys et al., 2018).
  • While existing methods based on weakly supervised learning demonstrate empirical gains, there is no existing formalism for describing the theoretical guarantees conferred by different forms of weak supervision (Kulkarni et al., 2015; Reed et al., 2015; Bouchacourt et al., 2018).
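A minimal sketch of such a two-stage data-generating process, assuming toy, illustrative factor names and a toy random "renderer" (neither comes from the paper or its datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 16))  # toy, fixed "rendering" weights

def sample_factors(n):
    """Step 1: sample interpretable factors of variation.
    The factor names (size, x-position, y-position) are illustrative only."""
    size = rng.uniform(0.5, 1.5, size=n)
    pos_x = rng.uniform(-1.0, 1.0, size=n)
    pos_y = rng.uniform(-1.0, 1.0, size=n)
    return np.stack([size, pos_x, pos_y], axis=1)   # shape (n, 3)

def generate(s):
    """Step 2: conditional on the sampled factors, generate an observation
    (here a toy nonlinear map from 3 factors to a 16-dimensional data point)."""
    return np.tanh(s @ W)

s = sample_factors(5)    # underlying factors, one row per example
x = generate(s)          # observed data points
# A disentangled representation of x would recover each column of s
# in a distinct coordinate.
print(s.shape, x.shape)  # (5, 3) (5, 16)
```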
Highlights
  • Many real-world datasets can be intuitively described via a data-generating process that first samples an underlying set of interpretable factors and then, conditional on those factors, generates an observed data point.
  • While existing methods based on weakly supervised learning demonstrate empirical gains, there is no existing formalism for describing the theoretical guarantees conferred by different forms of weak supervision (Kulkarni et al., 2015; Reed et al., 2015; Bouchacourt et al., 2018).
  • We construct a theoretical framework to rigorously analyze the disentanglement guarantees of weak supervision algorithms.
  • Our paper clarifies several important concepts, such as consistency and restrictiveness, that have been hitherto confused or overlooked in the existing literature, and provides a formalism that precisely distinguishes when disentanglement arises from supervision versus model inductive bias.
  • We encourage the search for other learning algorithms that may have theoretical guarantees when paired with the right form of supervision.
  • We hope that our framework enables the theoretical analysis of other promising weak supervision methods.
Methods
  • We conducted experiments on five prominent datasets in the disentanglement literature: Shapes3D (Kim & Mnih, 2018), dSprites (Higgins et al., 2017), Scream-dSprites (Locatello et al., 2019), SmallNORB (LeCun et al., 2004), and Cars3D (Reed et al., 2015).
  • Since existing quantitative metrics of disentanglement all measure the performance of an encoder with respect to the true data generator, we trained an encoder post-hoc to approximately invert the learned generator and measured all quantitative metrics on that encoder (a minimal sketch of this setup follows this list).
  • Our theory assumes that the learned generator is invertible.
  • While this does not hold exactly for conventional GANs, our empirical results show that this is not an issue in practice.
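As a rough illustration of this post-hoc evaluation protocol (see also Table 1 below), the sketch fits an "encoder" to invert a frozen generator using only samples drawn from that generator. Everything here is a toy stand-in: the generator is a fixed random linear map rather than a trained GAN, and the encoder is a least-squares inverse rather than the probabilistic Gaussian encoder used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, n_data = 4, 32

# Frozen "learned generator": latent factors -> data. A fixed random linear
# map stands in for the trained GAN generator; it receives no gradients below.
G = rng.standard_normal((n_factors, n_data))
def generator(s):
    return s @ G

# Training data for the encoder comes *only from the learned generator*.
s_train = rng.standard_normal((10_000, n_factors))
x_train = generator(s_train)

# Post-hoc encoder: here just the least-squares inverse map x -> s.
# (The paper trains a probabilistic Gaussian encoder; this is a stand-in.)
E, *_ = np.linalg.lstsq(x_train, s_train, rcond=None)
def encoder(x):
    return x @ E

# Disentanglement metrics would now be computed on `encoder`, e.g. by
# checking how well each recovered coordinate tracks one true factor.
s_test = rng.standard_normal((5, n_factors))
print(np.allclose(encoder(generator(s_test)), s_test, atol=1e-6))
```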
Conclusion
  • We construct a theoretical framework to rigorously analyze the disentanglement guarantees of weak supervision algorithms.
  • Our paper clarifies several important concepts, such as consistency and restrictiveness, that have been hitherto confused or overlooked in the existing literature, and provides a formalism that precisely distinguishes when disentanglement arises from supervision versus model inductive bias.
  • We hope that our formalism and experiments inspire greater theoretical and scientific scrutiny of the inductive biases present in existing models.
  • We hope that our framework enables the theoretical analysis of other promising weak supervision methods.
Tables
  • Table1: We trained a probabilistic Gaussian encoder to approximately invert the generative model. The encoder is not trained jointly with the generator; it is trained separately from the generative model (i.e., encoder gradients do not backpropagate into the generative model). During training, the encoder is only exposed to data generated by the learned generative model
  • Table2: Generative model architecture
  • Table3: Discriminator used for restricted labeling. Parts in red are part of hyperparameter search
  • Table4: Discriminator used for match pairing. We use a projection discriminator (Miyato & Koyama, 2018) and thus have an unconditional and conditional head. Parts in red are part of hyperparameter search
  • Table5: Discriminator used for rank pairing. For rank-pairing, we use a special variant of the projection discriminator, where the conditional logit is computed by taking the difference between the embeddings of the two samples in the pair and multiplying by y ∈ {−1, +1} (a minimal sketch of this conditional head follows this list). The discriminator thus implicitly takes on the role of an adversarially trained encoder that checks for violations of the ranking rule in the embedding space. Parts in red are part of hyperparameter search
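A minimal sketch of the conditional-logit construction described in the Table5 caption; the layer sizes, random weights, and the use of a scalar conditional embedding are illustrative assumptions, not the paper's actual discriminator architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_h = 64, 16

# Toy discriminator trunk and heads (random weights stand in for learned ones).
W_trunk = rng.standard_normal((d_x, d_h))
w_uncond = rng.standard_normal(d_h)   # unconditional head: real/fake score
w_cond = rng.standard_normal(d_h)     # conditional head: scalar embedding

def features(x):
    return np.tanh(x @ W_trunk)

def rank_pairing_logit(x1, x2, y):
    """Discriminator logit for a rank-paired example (x1, x2, y), y in {-1, +1}.

    The unconditional part scores each sample; the conditional part takes the
    difference of the two scalar embeddings and multiplies it by y, so the
    discriminator implicitly checks whether the pair violates the ranking."""
    h1, h2 = features(x1), features(x2)
    uncond = h1 @ w_uncond + h2 @ w_uncond
    cond = y * ((h1 @ w_cond) - (h2 @ w_cond))
    return uncond + cond

x1, x2 = rng.standard_normal(d_x), rng.standard_normal(d_x)
print(rank_pairing_logit(x1, x2, y=+1), rank_pairing_logit(x1, x2, y=-1))
```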
Funding
  • Provides a theoretical framework to assist in analyzing the disentanglement guarantees conferred by weak supervision when coupled with learning algorithms based on distribution matching
  • Presents a comprehensive theoretical framework for weakly supervised disentanglement, and evaluate our framework on several datasets
  • Proposes a set of definitions for disentanglement that can handle correlated factors and are inspired by many existing definitions in the literature
  • Provides a conceptually useful and theoretically rigorous calculus of disentanglement
  • Shows that certain weak supervision methods do not guarantee disentanglement, and that our calculus can determine whether disentanglement is guaranteed when multiple sources of weak supervision are combined
Study subjects and analysis
prominent datasets: 5
We empirically verify the guarantees and limitations of several weak supervision methods (restricted labeling, match-pairing, and rank-pairing), demonstrating the predictive power and usefulness of our theoretical framework. We conducted experiments on five prominent datasets in the disentanglement literature: Shapes3D (Kim & Mnih, 2018), dSprites (Higgins et al., 2017), Scream-dSprites (Locatello et al., 2019), SmallNORB (LeCun et al., 2004), and Cars3D (Reed et al., 2015). Since some of the underlying factors are treated as nuisance variables in SmallNORB and Scream-dSprites, we show in Appendix C that our theoretical framework can be easily adapted accordingly to handle such situations

samples compared per pair: 2
Rank Pairing is another form of paired data generation where the pairs (x, x′) are generated in an i.i.d. fashion, and an additional indicator variable y is observed that indicates whether the corresponding latent s_i is greater than s′_i: y = 1{s_i ≥ s′_i}. Such a form of supervision is effective when it is easier to compare two samples with respect to an underlying factor than to directly collect labels (e.g., comparing two object sizes versus providing a ruler measurement of an object). Although supervision via ranking features prominently in the metric learning literature (McFee & Lanckriet, 2010; Wang et al., 2014), our focus in this paper will be on rank pairing in the context of disentanglement guarantees.
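A minimal sketch of how a rank-paired example could be simulated for a chosen factor index i, assuming a hypothetical toy_generator that stands in for the true generative process:

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, d_x = 3, 8
W = rng.standard_normal((n_factors, d_x))

def toy_generator(s):
    # Illustrative stand-in for the true generative model g*(s).
    return np.tanh(s @ W)

def sample_rank_pair(i):
    """Draw (x, x', y) for rank pairing on factor i: the pair is i.i.d. and
    y indicates whether s_i >= s'_i, i.e. y = 1{s_i >= s'_i}."""
    s, s_prime = rng.standard_normal(n_factors), rng.standard_normal(n_factors)
    x, x_prime = toy_generator(s), toy_generator(s_prime)
    y = int(s[i] >= s_prime[i])
    return x, x_prime, y

x, x_prime, y = sample_rank_pair(i=0)
print(y)  # 1 if the first sample has the larger value of factor 0, else 0
```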

important observations: 2
E_{p*}‖e_I ∘ g*(s_I, s_\I) − e_I ∘ g*(s_I, s′_\I)‖² = 0. We now make two important observations. First, a valuable trait of our encoder-based definitions is that one can check for encoder consistency / restrictiveness / disentanglement as long as one has access to match pairing data from the oracle generator.
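For concreteness, this expectation can be estimated by Monte Carlo given match-pairing data from the oracle generator: sample s_I once, resample the remaining factors independently for the second element of the pair, encode both generated observations, and average the squared distance between the I-components of the two codes. In the sketch below the oracle generator is a toy linear map and the encoder is its pseudo-inverse (so the gap is zero by construction); both are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, d_x = 4, 16
G = rng.standard_normal((n_factors, d_x))

def oracle_generator(s):
    # Toy stand-in for the oracle generator g*.
    return s @ G

def encoder(x):
    # Toy encoder: the pseudo-inverse of G, which recovers s exactly,
    # so this encoder is consistent on every factor set I by construction.
    return x @ np.linalg.pinv(G)

def consistency_gap(I, n_samples=5000):
    """Monte Carlo estimate of the consistency gap on factor set I:
    the mean squared distance between the I-components of the codes of two
    generated samples that share s_I but have independently resampled
    remaining factors. A value near zero indicates consistency on I."""
    rest = [j for j in range(n_factors) if j not in I]
    gap = 0.0
    for _ in range(n_samples):
        s = rng.standard_normal(n_factors)
        s_pair = s.copy()
        s_pair[rest] = rng.standard_normal(len(rest))  # resample non-I factors
        e1 = encoder(oracle_generator(s))[I]
        e2 = encoder(oracle_generator(s_pair))[I]
        gap += float(np.sum((e1 - e2) ** 2))
    return gap / n_samples

print(consistency_gap(I=[0, 1]))  # ~0 for this perfectly consistent encoder
```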

special cases: 2
We empirically verify that single-factor consistency or restrictiveness can be achieved with the supervision methods of interest. Note there are two special cases of match pairing: one where S_i is shared within each pair, and one where S_\i is shared (so that only S_i changes).

kinds of datasets: 2
In practice, it may be easier to acquire paired data where multiple factors change simultaneously. If we have access to two kinds of datasets, one where S_I are changed and one where S_J are changed, our calculus predicts that training on both datasets will guarantee restrictiveness on S_{I∩J}. The final heatmap shows six such intersection settings and measures the normalized restrictiveness score; in all but one setting, the results are consistent with our theory.
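A toy illustration of this prediction, using hypothetical factor names that are not taken from any of the datasets above:

```python
# Hypothetical factor names, used only to illustrate the set calculus.
I = {"size", "azimuth"}   # factors changed within pairs in dataset 1
J = {"size", "color"}     # factors changed within pairs in dataset 2

# Training with match pairing on both datasets is predicted to guarantee
# restrictiveness only on the factors changed in both, i.e. on S_{I ∩ J}.
print(I & J)  # {'size'}
```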

Reference
  • Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
  • Diane Bouchacourt, Ryota Tomioka, and Sebastian Nowozin. Multi-level variational autoencoder: Learning disentangled representations from grouped observations. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
  • Junxiang Chen and Kayhan Batmanghelich. Weakly supervised disentanglement by pairwise similarities. arXiv preprint arXiv:1906.01044, 2019.
  • Tian Qi Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. Advances in Neural Information Processing Systems, pp. 2610–2620, 2018a.
  • Yutian Chen, Yannis Assael, Brendan Shillingford, David Budden, Scott Reed, Heiga Zen, Quan Wang, Luis C Cobo, Andrew Trask, Ben Laurie, et al. Sample efficient adaptive text-to-speech. arXiv preprint arXiv:1809.10460, 2018b.
  • Cian Eastwood and Christopher KI Williams. A framework for the quantitative evaluation of disentangled representations. ICLR, 2018.
  • Babak Esmaeili, Hao Wu, Sarthak Jain, Alican Bozkurt, Narayanaswamy Siddharth, Brooks Paige, Dana H Brooks, Jennifer Dy, and Jan-Willem van de Meent. Structured disentangled representations. arXiv preprint arXiv:1804.02086, 2018.
  • Aviv Gabbay and Yedid Hoshen. Latent optimization for non-adversarial representation disentanglement. arXiv preprint arXiv:1906.11796, 2019.
  • Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on data science and advanced analytics (DSAA), pp. 80–89. IEEE, 2018.
  • Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014.
  • Luigi Gresele, Paul K Rubenstein, Arash Mehrjou, Francesco Locatello, and Bernhard Scholkopf. The incomplete rosetta stone problem: Identifiability results for multi-view nonlinear ica. arXiv preprint arXiv:1905.06642, 2019.
  • Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR, 2(5):6, 2017.
  • Irina Higgins, David Amos, David Pfau, Sebastien Racaniere, Loic Matthey, Danilo Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. arXiv preprint arXiv:1812.02230, 2018.
  • Hyunjik Kim and Andriy Mnih. Disentangling by factorising. ICML, 2018.
  • Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. Advances in neural information processing systems, pp. 3581–3589, 2014.
  • Jack Klys, Jake Snell, and Richard Zemel. Learning latent subspaces in variational autoencoders. In Advances in Neural Information Processing Systems, pp. 6444–6454, 2018.
  • Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in neural information processing systems, pp. 2539–2547, 2015.
  • Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. In ICLR, 2018.
  • Hsin-Ying Lee, Hung-Yu Tseng, Qi Mao, Jia-Bin Huang, Yu-Ding Lu, Maneesh Singh, and MingHsuan Yang. Drit++: Diverse image-to-image translation via disentangled representations. arXiv preprint arXiv:1905.01270, 2019.
  • Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Scholkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. ICML, 2019.
  • Brian McFee and Gert R Lanckriet. Metric learning to rank. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 775–782, 2010.
  • Takeru Miyato and Masanori Koyama. cGANs with projection discriminator. arXiv preprint arXiv:1802.05637, 2018.
  • Siddharth Narayanaswamy, T Brooks Paige, Jan-Willem Van de Meent, Alban Desmaison, Noah Goodman, Pushmeet Kohli, Frank Wood, and Philip Torr. Learning disentangled representations with semi-supervised deep generative models. In Advances in Neural Information Processing Systems, pp. 5925–5935, 2017.
  • Scott E Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In Advances in neural information processing systems, pp. 1252–1260, 2015.
  • Karl Ridgeway and Michael C Mozer. Learning deep disentangled embeddings with the f-statistic loss. Advances in Neural Information Processing Systems, pp. 185–194, 2018.
  • Raphael Suter, Dorde Miladinovic, Stefan Bauer, and Bernhard Scholkopf. Interventional robustness of deep latent variable models. ICML, 2018.
  • Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1386–1393, 2014.