Fair active learning
Expert Systems with Applications (2022)
Abstract
Machine learning (ML) is increasingly being used in high-stakes applications impacting society. Therefore, it is of critical importance that ML models do not propagate discrimination. Collecting accurately labeled data in societal applications is challenging and costly. Active learning is a promising approach to building an accurate classifier by interactively querying an oracle within a labeling budget. We introduce the fair active learning framework to carefully select the data points to be labeled so as to balance model accuracy and fairness. Incorporating the notion of fairness into the active learning sampling core requires measuring the fairness of the model after adding each unlabeled sample. Since labels are unknown in advance, we propose an expected fairness metric that probabilistically measures the impact of each sample by considering every possible class label. Next, we propose multiple optimizations to balance the trade-off between accuracy and fairness. Our first optimization linearly aggregates the expected fairness with entropy using a control parameter. To avoid erroneous estimation of the expected fairness, we propose a nested approach that maintains the accuracy of the model by limiting the search space to the top bucket of sample points with the largest entropy. Finally, to ensure the model's unfairness is reduced after labeling, we propose replicating the points that truly reduce the unfairness once labeled. We demonstrate the effectiveness and efficiency of our proposed algorithms on widely used benchmark datasets under the demographic parity and equalized odds notions of fairness.
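The abstract's first optimization, linearly aggregating an expected fairness term with entropy, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the classifier (scikit-learn `LogisticRegression`), the demographic-parity gap as the unfairness measure, the held-out evaluation pool, and all function names are assumptions chosen for the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # placeholder model, not the paper's choice


def demographic_parity_gap(model, X_eval, g_eval):
    """Unfairness proxy: |P(yhat=1 | g=0) - P(yhat=1 | g=1)| on a held-out pool."""
    preds = model.predict(X_eval)
    return abs(preds[g_eval == 0].mean() - preds[g_eval == 1].mean())


def expected_fairness(X_lab, y_lab, x_cand, X_eval, g_eval, proba):
    """Expected unfairness if x_cand were labeled: for each possible label y,
    hypothetically retrain with (x_cand, y) and weight the resulting
    unfairness by the model's predicted probability of that label."""
    score = 0.0
    for y in (0, 1):
        m = LogisticRegression().fit(np.vstack([X_lab, x_cand]),
                                     np.append(y_lab, y))
        score += proba[y] * demographic_parity_gap(m, X_eval, g_eval)
    return score


def fal_select(X_lab, y_lab, X_pool, X_eval, g_eval, alpha=0.5):
    """Pick the pool index maximizing a linear aggregation of entropy
    (accuracy-seeking) and negated expected unfairness (fairness-seeking),
    controlled by alpha."""
    base = LogisticRegression().fit(X_lab, y_lab)
    P = base.predict_proba(X_pool)
    entropy = -(P * np.log(P + 1e-12)).sum(axis=1)        # uncertainty term
    exp_unfair = np.array([
        expected_fairness(X_lab, y_lab, X_pool[i], X_eval, g_eval, P[i])
        for i in range(len(X_pool))
    ])
    scores = alpha * entropy - (1 - alpha) * exp_unfair   # linear aggregation
    return int(np.argmax(scores))
```

The nested optimization described next in the abstract would first restrict `X_pool` to the top-entropy bucket before scoring expected fairness, which also cuts the cost of the per-candidate retraining loop.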
Keywords
Active learning, Algorithmic fairness, Limited labeled data