Normative Principles for Evaluating Fairness in Machine Learning

AIES (2020)

Citations: 29 | Views: 5
Abstract
There are many incompatible ways to measure fair outcomes for machine learning algorithms. The goal of this paper is to characterize rates of success and error across protected groups (race, gender, sexual orientation) as a distribution problem, and describe the possible solutions to this problem according to different normative principles from moral and political philosophy. These normative principles are based on various competing attributes within a distribution problem: intentions, compensation, desert, consent, and consequences. Each principle will be applied to a sample risk-assessment classifier to demonstrate the philosophical arguments underlying different sets of fairness metrics.
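To make concrete what "rates of success and error across protected groups" means in practice, below is a minimal Python sketch (not from the paper) that thresholds a hypothetical binary risk score and reports, per group, the base rate, selection rate, true positive rate, and false positive rate; demographic parity compares selection rates across groups, while equalized odds compares true and false positive rates. The data, group labels, and the 0.5 threshold are illustrative assumptions.

    from collections import defaultdict

    def group_rates(y_true, y_score, groups, threshold=0.5):
        """Per-group success/error rates for a thresholded binary risk score."""
        counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
        for y, s, g in zip(y_true, y_score, groups):
            pred = int(s >= threshold)
            key = ("tp" if y else "fp") if pred else ("fn" if y else "tn")
            counts[g][key] += 1
        rates = {}
        for g, c in counts.items():
            pos, neg = c["tp"] + c["fn"], c["fp"] + c["tn"]
            n = pos + neg
            rates[g] = {
                "base_rate": pos / n,                       # P(Y=1 | group)
                "selection_rate": (c["tp"] + c["fp"]) / n,  # compared by demographic parity
                "tpr": c["tp"] / pos if pos else None,      # compared by equalized odds ...
                "fpr": c["fp"] / neg if neg else None,      # ... together with this
            }
        return rates

    # Toy example with two hypothetical groups "a" and "b".
    y_true  = [1, 0, 1, 0, 1, 0, 1, 0]
    y_score = [0.9, 0.6, 0.4, 0.2, 0.8, 0.7, 0.6, 0.1]
    groups  = ["a", "a", "a", "a", "b", "b", "b", "b"]
    for g, r in group_rates(y_true, y_score, groups).items():
        print(g, r)   # equal base rates here, but unequal TPRs and selection rates

Under these toy numbers the two groups share a base rate yet differ in true positive and selection rates, which is exactly the kind of distributional disparity the paper's normative principles are meant to adjudicate.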