Generalized Invariant Risk Minimization: relating adaptation and invariant representation learning

Semantic Scholar (2020)

Cited by 1
Abstract
When faced with new domains or environments, a standard strategy is to adapt the parameters of a model trained on one domain so that it performs well on the new domain. Here we introduce Generalized Invariant Risk Minimization (G-IRM), a technique that takes a pre-specified adaptation mechanism and aims to find invariant representations that (a) perform well across multiple training environments and (b) cannot be improved through adaptation to individual environments. G-IRM thereby generalizes the ideas put forward by Invariant Risk Minimization (IRM) and allows us to directly compare the performance of invariant representations with that of adapted representations on an equal footing, i.e., with respect to the same adaptation mechanism. We propose a framework to test the hypotheses that (i) G-IRM outperforms IRM, (ii) G-IRM outperforms Empirical Risk Minimization (ERM), and (iii) more powerful adaptation mechanisms lead to better G-IRM performance. Such a relationship would provide a novel and systematic way to design regularizers for invariant representation learning, and has the potential to scale Invariant Risk Minimization to real-world datasets.
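To make the idea concrete, below is a minimal sketch of the IRMv1-style objective that IRM proposes and that G-IRM generalizes: an ERM term over all environments plus a penalty that is zero exactly when the representation cannot be improved by per-environment adaptation (here, the simplest possible adaptation mechanism: rescaling a dummy classifier `w` away from 1). All names (`env_risk_grad`, `irm_objective`, `lam`) are illustrative assumptions, not the paper's notation, and the squared-error loss is chosen only for simplicity.

```python
import numpy as np

def env_risk_grad(phi, y):
    # Squared-error risk of a dummy classifier w * phi evaluated at w = 1,
    # together with its gradient with respect to the scalar w.
    # phi: representation outputs for one environment; y: targets.
    resid = phi - y
    risk = np.mean(resid ** 2)
    grad = np.mean(2.0 * resid * phi)  # d/dw E[(w*phi - y)^2] at w = 1
    return risk, grad

def irm_objective(envs, lam=1.0):
    # envs: list of (phi, y) arrays, one pair per training environment.
    # Objective = sum of per-environment risks
    #           + lam * sum of squared per-environment gradients.
    # The penalty vanishes iff no environment can lower its risk by
    # rescaling the classifier, i.e. the representation is "invariant"
    # with respect to this simple adaptation mechanism.
    total_risk, penalty = 0.0, 0.0
    for phi, y in envs:
        risk, grad = env_risk_grad(phi, y)
        total_risk += risk
        penalty += grad ** 2
    return total_risk + lam * penalty
```

A representation that fits every environment equally well incurs zero penalty, while one that each environment would rescale differently is penalized; G-IRM's proposal, as the abstract describes it, is to replace this fixed rescaling with an arbitrary pre-specified adaptation mechanism.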