Metric-Free Individual Fairness with Cooperative Contextual Bandits

2020 IEEE International Conference on Data Mining (ICDM), 2020

Abstract
Data mining algorithms are increasingly used in automated decision making across all walks of daily life. Unfortunately, as reported in several studies, these algorithms learn bias from data and the environment, leading to inequitable and unfair solutions. To mitigate bias in machine learning, different formalizations of fairness have been proposed; these can be categorized into group fairness and individual fairness. Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group. Individual fairness, on the other hand, requires that similar individuals be treated similarly. However, individual fairness remains understudied because it relies on a problem-specific similarity metric. We propose a metric-free formulation of individual fairness and a cooperative contextual bandits (CCB) algorithm. The CCB algorithm treats fairness as a reward and attempts to maximize it. The advantage of treating fairness as a reward is that the fairness criterion does not need to be differentiable. The proposed algorithm is tested on multiple real-world benchmark datasets. The results show the effectiveness of the proposed algorithm at mitigating bias and at achieving both individual and group fairness.
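
The core idea described in the abstract is to optimize a fairness criterion directly as a bandit reward rather than as a differentiable loss term. Below is a minimal sketch of that idea, assuming an epsilon-greedy linear contextual bandit and a hypothetical consistency-style individual-fairness signal; the names fairness_reward, run_bandit, and the trade-off weight lam are illustrative assumptions, not the paper's actual CCB algorithm or reward definition.

```python
import numpy as np

rng = np.random.default_rng(0)

def fairness_reward(x, action, history, k=5):
    """Hypothetical individual-fairness signal: fraction of the k nearest
    previously seen individuals that received the same action (consistency)."""
    if len(history) < k:
        return 1.0
    X = np.array([h[0] for h in history])
    A = np.array([h[1] for h in history])
    nearest = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return float(np.mean(A[nearest] == action))

def run_bandit(X, y, n_actions=2, eps=0.1, lam=0.5):
    """Epsilon-greedy linear bandit; reward = correctness + lam * fairness.
    The fairness term need not be differentiable, only computable."""
    d = X.shape[1]
    # One ridge-regression-style value estimate per action (inverse kept explicitly).
    A_inv = [np.eye(d) for _ in range(n_actions)]
    b = [np.zeros(d) for _ in range(n_actions)]
    history, total = [], 0.0
    for x, label in zip(X, y):
        if rng.random() < eps:
            a = int(rng.integers(n_actions))      # explore
        else:
            scores = [x @ (A_inv[j] @ b[j]) for j in range(n_actions)]
            a = int(np.argmax(scores))            # exploit
        r = float(a == label) + lam * fairness_reward(x, a, history)
        # Sherman-Morrison update of the inverse design matrix for the chosen arm.
        Ax = A_inv[a] @ x
        A_inv[a] -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        b[a] += r * x
        history.append((x, a))
        total += r
    return total / len(X)
```

A call such as run_bandit(X, y) on a small tabular dataset returns the average combined reward; a larger lam places more weight on treating similar individuals alike, at the possible expense of raw accuracy.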
Keywords
decision making, contextual bandits, fair machine learning