Robustness to Subpopulation Shift with Domain Label Noise via Regularized Annotation of Domains
CoRR (2024)
Abstract
Existing methods for last layer retraining that aim to optimize worst-group
accuracy (WGA) rely heavily on well-annotated groups in the training data. We
show, both in theory and practice, that annotation-based data augmentations
using either downsampling or upweighting for WGA are susceptible to domain
annotation noise, and in high-noise regimes approach the WGA of a model trained
with vanilla empirical risk minimization. We introduce Regularized Annotation
of Domains (RAD) in order to train robust last layer classifiers without the
need for explicit domain annotations. Our results show that RAD is competitive
with other recently proposed domain annotation-free techniques. Most
importantly, RAD outperforms state-of-the-art annotation-reliant methods even
with only 5% noise in the training data for several publicly available
datasets.
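
The abstract contrasts annotation-reliant baselines (downsampling or upweighting by group) with annotation-free last layer retraining, evaluated by worst-group accuracy (WGA). The sketch below illustrates the baseline setup only, not the paper's RAD method: group-upweighted retraining of a last layer classifier on frozen features, followed by a WGA computation. All data and variable names (features, labels, domains) are hypothetical placeholders, and the synthetic data is random, so the numbers carry no meaning.

```python
# Minimal sketch, assuming a frozen backbone whose features are given;
# illustrates group upweighting and worst-group accuracy (WGA), not RAD.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features for a 2-class, 2-domain problem.
n, d = 1000, 16
features = rng.normal(size=(n, d))
labels = rng.integers(0, 2, size=n)
domains = rng.integers(0, 2, size=n)   # domain annotations (possibly noisy)
groups = 2 * labels + domains          # group = (label, domain) pair, 4 groups

# Upweighting baseline: weight each sample inversely to its group's
# frequency so the last layer does not ignore minority groups. Noisy
# domain labels corrupt `groups`, which is the failure mode the paper studies.
counts = np.bincount(groups, minlength=4)
weights = (n / (4 * counts))[groups]

clf = LogisticRegression(max_iter=1000)
clf.fit(features, labels, sample_weight=weights)

# WGA: the minimum per-group accuracy (here computed on the training
# split for brevity; in practice use held-out data).
preds = clf.predict(features)
wga = min(np.mean(preds[groups == g] == labels[groups == g]) for g in range(4))
print(f"worst-group accuracy: {wga:.3f}")
```

Under clean annotations the upweighting baseline lifts WGA above plain ERM; the paper's claim is that as noise in `domains` grows, this advantage erodes toward the ERM level, motivating annotation-free methods such as RAD.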