Multigroup Robustness
arXiv (2024)

Abstract
To address the shortcomings of real-world datasets, robust learning
algorithms have been designed to overcome arbitrary and indiscriminate data
corruption. However, practical processes of gathering data may lead to patterns
of data corruption that are localized to specific partitions of the training
dataset. Motivated by critical applications where the learned model is deployed
to make predictions about people from a rich collection of overlapping
subpopulations, we initiate the study of multigroup robust algorithms whose
robustness guarantees for each subpopulation only degrade with the amount of
data corruption inside that subpopulation. When the data corruption is not
distributed uniformly over subpopulations, our algorithms provide more
meaningful robustness guarantees than standard guarantees that are oblivious to
how the data corruption and the affected subpopulations are related. Our
techniques establish a new connection between multigroup fairness and
robustness.
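To make the motivation concrete, here is a toy sketch (not the paper's algorithm; the indices, group definitions, and corruption pattern are all hypothetical) showing how localized corruption can leave the overall corruption rate small while hitting one subpopulation hard, and leaving another untouched — exactly the situation where a per-group guarantee is more informative than a uniform one.

```python
# Toy illustration (hypothetical data, not the paper's method):
# compare the overall corruption rate against per-group rates
# when corruption is localized to one part of the dataset.

n = 100
corrupted = set(range(90, 100))  # corruption localized to indices 90-99

# Hypothetical overlapping subpopulations, given as index sets.
groups = {
    "A": set(range(0, 60)),    # untouched by the corruption
    "B": set(range(50, 100)),  # contains every corrupted point
    "C": set(range(40, 95)),   # partially overlaps the corruption
}

overall_rate = len(corrupted) / n
print(f"overall corruption rate: {overall_rate:.2f}")

for name, g in groups.items():
    # Fraction of this subpopulation's points that are corrupted.
    rate = len(g & corrupted) / len(g)
    print(f"group {name}: corruption rate {rate:.2f}")
```

Here a standard robustness guarantee degrades with the overall rate (0.10) for every group, whereas a multigroup robust guarantee for group A would degrade only with A's own corruption rate, which is zero.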