Generalized Invariant Risk Minimization: relating adaptation and invariant representation learning

Semantic Scholar (2020)

Abstract
When faced with a new domain or environment, a standard strategy is to adapt the parameters of a model trained on one domain so that it performs well on the new one. Here we introduce Generalized Invariant Risk Minimization (G-IRM), a technique that takes a pre-specified adaptation mechanism and aims to find invariant representations that (a) perform well across multiple training environments and (b) cannot be improved through adaptation to individual environments. G-IRM thereby generalizes ideas put forward by Invariant Risk Minimization (IRM) and allows us to directly compare the performance of invariant representations with that of adapted representations on an equal footing, i.e., with respect to the same adaptation mechanism. We propose a framework to test the hypotheses that (i) G-IRM outperforms IRM, (ii) G-IRM outperforms Empirical Risk Minimization (ERM), and (iii) more powerful adaptation mechanisms lead to better G-IRM performance. Such a relationship would provide a novel and systematic way to design regularizers for invariant representation learning and has the potential to scale Invariant Risk Minimization towards real-world datasets.
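To make the abstract's objective concrete, the following is a minimal sketch of a G-IRM-style criterion, not the paper's actual method: the names (`env_loss`, `adaptation_gain`, `girm_objective`), the choice of a linear model, the scalar-multiplier adaptation mechanism (one gradient step on a multiplier `s`, in the spirit of IRM's penalty), and the penalty weight `lam` are all illustrative assumptions. The idea it demonstrates is the one stated above: penalize representations whose per-environment loss can still be reduced by the pre-specified adaptation mechanism.

```python
import numpy as np

def env_loss(w, X, y):
    # Mean squared error of a linear predictor on one environment.
    r = X @ w - y
    return float(np.mean(r ** 2))

def adaptation_gain(w, X, y, lr=0.1):
    # Hypothetical adaptation mechanism: one gradient step on a scalar
    # multiplier s applied to the shared weights (s = 1.0 before
    # adaptation). Returns how much the environment's loss drops.
    s = 1.0
    r = X @ (s * w) - y
    grad_s = float(2.0 * np.mean(r * (X @ w)))  # d/ds of the MSE
    s_adapted = s - lr * grad_s
    return env_loss(w, X, y) - env_loss(s_adapted * w, X, y)

def girm_objective(w, envs, lam=1.0):
    # (a) average risk across environments, plus
    # (b) a penalty measuring how much adaptation to each individual
    #     environment could still improve the representation.
    risks = [env_loss(w, X, y) for X, y in envs]
    gains = [adaptation_gain(w, X, y) for X, y in envs]
    return float(np.mean(risks) + lam * sum(max(g, 0.0) for g in gains))
```

For a representation that fits every environment exactly, both the risk and the adaptation gain vanish, so the objective is zero; a misspecified representation pays both through its residual risk and through whatever the adaptation mechanism can still recover per environment.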