Differential Privacy for Fair Deep Learning Models
2021 IEEE International Systems Conference (SysCon), 2021
Abstract
Increasingly, we rely on deep learning to make decisions. Yet, these models may make unfair decisions due to bias in the training datasets. Most often, this bias takes the form of discrimination based on sensitive attributes such as gender, race, or ethnicity. Indeed, discrimination and inequities remain a daily reality in many fields, e.g., hiring processes, tenure decisions, and workplaces. Existing datasets may therefore be biased, which results in unfair learning models. In this paper, we empirically show that training on biased datasets produces unfair and discriminating models. To handle this problem, we propose a pre-processing approach that takes advantage of differential privacy properties to mitigate bias in the sensitive attributes of datasets. More precisely, in the learning process, we introduce the randomized response mechanism to mitigate inequity and avoid discrimination stemming from the training dataset. We evaluate our approach on a hiring process using a synthetic dataset of candidate resumes. Simulation results show that our approach mitigates bias and makes fairer decisions compared to the TensorFlow differential privacy library or a learning model without our pre-processing approach. This is because the TensorFlow library applies differential privacy to all attributes, whereas our approach applies the differential privacy mechanism only to the sensitive attributes that are the source of the discrimination.
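To make the pre-processing idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of the classical binary randomized response mechanism applied to a single sensitive attribute before training. The function name, the epsilon parameter, and the example "gender" column are assumptions made for illustration only.

```python
import numpy as np

def randomized_response(values, epsilon, rng=None):
    """Binary randomized response on a 0/1 sensitive attribute.

    Each entry is kept with probability e^eps / (e^eps + 1) and flipped
    otherwise, which satisfies epsilon-local differential privacy for
    that attribute. (Illustrative sketch, not the paper's code.)
    """
    rng = np.random.default_rng() if rng is None else rng
    values = np.asarray(values)
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = rng.random(values.shape) < p_keep
    return np.where(keep, values, 1 - values)

# Hypothetical usage: perturb only the sensitive column of a dataset
# before feeding it to the learning model; all other attributes are
# left untouched, in contrast to applying DP noise to every attribute.
gender = np.array([0, 1, 1, 0, 1])          # assumed binary encoding
noisy_gender = randomized_response(gender, epsilon=1.0)
```

A smaller epsilon makes flips more likely, weakening the statistical link between the sensitive attribute and the model's decisions at the cost of utility on that attribute.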
Keywords
Differential Privacy, Fair decision, Discrimination, Bias, Sensitive attribute, TensorFlow