Rawlsian Fairness for Machine Learning.
arXiv: Learning (2016)
Abstract
Motivated by concerns that automated decision-making procedures can unintentionally lead to discriminatory behavior, we study a technical definition of fairness modeled after John Rawls' notion of equality of opportunity. In the context of a simple model of online decision making, we give an algorithm that satisfies this fairness constraint, while still being able to learn at a rate that is comparable to (but necessarily worse than) that of the best algorithms absent a fairness constraint. We prove a regret bound for algorithms in the linear contextual bandit framework that is a significant improvement over our technical companion paper [16], which gives black-box reductions in a more general setting. We analyze our algorithms both theoretically and experimentally. Finally, we introduce the notion of a discrimination index, and show that standard algorithms for our problem exhibit structured discriminatory behavior, whereas the fair algorithms we develop do not.
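The fairness constraint described in the abstract — never favoring one individual over another unless the algorithm can certify the first is better — is often realized in fair-bandit work via confidence-interval chaining. The sketch below is an illustrative reconstruction, not the paper's own pseudocode: the function name `fair_arm_selection` and the interface (per-arm mean estimates plus interval half-widths) are assumptions for demonstration.

```python
import numpy as np

def fair_arm_selection(means, widths, rng=None):
    """Illustrative fair selection via interval chaining (an assumed
    sketch, not the paper's exact algorithm): an arm may only be
    excluded if its confidence interval is strictly below every arm
    in the chained candidate set.

    means[i]  -- empirical mean reward estimate for arm i
    widths[i] -- confidence-interval half-width for arm i
    """
    if rng is None:
        rng = np.random.default_rng()
    lower = np.asarray(means, dtype=float) - np.asarray(widths, dtype=float)
    upper = np.asarray(means, dtype=float) + np.asarray(widths, dtype=float)

    # Seed the chain with the arm whose upper confidence bound is highest.
    active = {int(np.argmax(upper))}

    # Grow the chain: any arm whose interval overlaps an active arm's
    # interval cannot be certified worse, so it must join the set.
    changed = True
    while changed:
        changed = False
        for i in range(len(lower)):
            if i in active:
                continue
            if any(upper[i] >= lower[j] and upper[j] >= lower[i]
                   for j in active):
                active.add(i)
                changed = True

    # Playing uniformly at random within the chained set treats all
    # statistically indistinguishable arms identically.
    return int(rng.choice(sorted(active)))
```

With wide, overlapping intervals every arm joins the chain and selection is uniform; as intervals shrink, clearly inferior arms fall out of the set, recovering near-greedy play — which is consistent with the abstract's claim that learning proceeds at a rate comparable to, but necessarily worse than, the unconstrained optimum.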