Invariant Information Bottleneck for Domain Generalization

arxiv(2021)

Abstract
The main challenge for domain generalization (DG) is to overcome the potential distributional shift between multiple training domains and unseen test domains. One popular class of DG algorithms aims to learn representations that have an invariant causal relation across the training domains. However, certain features, called pseudo-invariant features, may be invariant in the training domains but not the test domain and can substantially decrease the performance of existing algorithms. To address this issue, we propose a novel algorithm, called Invariant Information Bottleneck (IIB), that learns a minimally sufficient representation that is invariant across training and testing domains. By minimizing the mutual information between the representation and inputs, IIB alleviates its reliance on pseudo-invariant features, which is desirable for DG. To verify the effectiveness of the IIB principle, we conduct extensive experiments on large-scale DG benchmarks. The results show that IIB outperforms invariant learning baselines (e.g., IRM) by an average of 2.8% and 3.8% accuracy over two evaluation metrics.
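The objective sketched in the abstract combines three ingredients: empirical risk, an invariance penalty across training domains, and an information-bottleneck term that bounds I(Z; X). A minimal illustration is below; note this is not the authors' implementation — the invariance term here uses a simple variance-across-domain-risks surrogate (in the style of V-REx) rather than IIB's exact formulation, the I(Z; X) term is the standard closed-form KL of a diagonal Gaussian encoder against a unit Gaussian prior, and all function and parameter names (`iib_style_loss`, `lambda_inv`, `beta`) are illustrative.

```python
import numpy as np

def iib_style_loss(domain_risks, mu, log_var, lambda_inv=1.0, beta=0.1):
    """Sketch of an IIB-style objective (illustrative, not the paper's exact loss).

    domain_risks: per-training-domain empirical risks.
    mu, log_var:  parameters of a diagonal Gaussian encoder q(z|x),
                  used to upper-bound the mutual information I(Z; X)
                  via KL(q(z|x) || N(0, I)).
    """
    risks = np.asarray(domain_risks, dtype=float)
    erm = risks.mean()                 # average risk over training domains
    inv_penalty = risks.var()          # variance surrogate for invariance
    # Closed-form KL(N(mu, diag(exp(log_var))) || N(0, I))
    kl = 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)
    return erm + lambda_inv * inv_penalty + beta * kl
```

With equal domain risks and a standard-normal encoder posterior (`mu = 0`, `log_var = 0`), both penalty terms vanish and the loss reduces to the mean risk, which is a quick sanity check on the formula.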
Keywords
Machine Learning (ML),Computer Vision (CV)