On the Alignment of Group Fairness with Attribute Privacy
CoRR (2022)
Abstract
Group fairness and privacy are fundamental aspects in designing trustworthy
machine learning models. Previous research has highlighted conflicts between
group fairness and different privacy notions. We are the first to demonstrate
the alignment of group fairness with the specific privacy notion of attribute
privacy in a black-box setting. Attribute privacy, quantified by resistance
to attribute inference attacks (AIAs), requires indistinguishability in the
target model's output predictions. Group fairness guarantees this
indistinguishability, thereby mitigating AIAs and achieving attribute privacy.
To demonstrate this, we first
introduce AdaptAIA, an enhancement of existing AIAs, tailored for real-world
datasets with class imbalances in sensitive attributes. Through theoretical and
extensive empirical analyses, we demonstrate the efficacy of two standard group
fairness algorithms (i.e., adversarial debiasing and exponentiated gradient
descent) against AdaptAIA. Additionally, since group fairness yields attribute
privacy, it acts as a defense against AIAs, for which no dedicated defense
currently exists. Overall, we show that group fairness aligns with attribute
privacy at
no additional cost other than the already existing trade-off with model
utility.
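To make the black-box setting concrete, here is a minimal, hypothetical sketch that pairs a fairness-constrained target model with a simple attribute inference attack on its output predictions. It uses fairlearn's `ExponentiatedGradient` reduction as a stand-in for the exponentiated gradient algorithm named above; the synthetic data, the logistic-regression attacker, and all variable names are illustrative assumptions and not the paper's AdaptAIA or experimental setup.

```python
# Hypothetical sketch (not the paper's code): train a target model with an
# exponentiated-gradient fairness reduction, then run a simple black-box
# attribute inference attack that tries to recover the sensitive attribute
# from the target model's output predictions alone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)

# Synthetic data: X features, y task label, s binary sensitive attribute.
n = 4000
s = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + s[:, None] * 0.8      # features correlated with s
y = (X[:, 0] + 0.5 * s + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, y, s, test_size=0.5, random_state=0
)

# Fair target model: exponentiated gradient with a demographic-parity constraint.
target = ExponentiatedGradient(
    LogisticRegression(max_iter=1000), constraints=DemographicParity()
)
target.fit(X_tr, y_tr, sensitive_features=s_tr)

# Black-box AIA: the attacker only sees the target model's predictions and
# trains a classifier to infer the sensitive attribute from them.
preds_tr = np.asarray(target.predict(X_tr)).reshape(-1, 1)
preds_te = np.asarray(target.predict(X_te)).reshape(-1, 1)
attacker = LogisticRegression().fit(preds_tr, s_tr)
aia_acc = attacker.score(preds_te, s_te)
print(f"attribute inference accuracy: {aia_acc:.3f} (0.5 ~ random guessing)")
```

Under the abstract's claim, enforcing group fairness pushes the target model's predictions toward indistinguishability across sensitive groups, so the attacker's accuracy in a setup like this should approach random guessing.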