A Framework For Benchmarking Discrimination-Aware Models In Machine Learning
AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (2019)
Abstract
Discrimination-aware models in machine learning are a recent area of study; such models aim to minimize the adverse impact of machine learning decisions on certain groups of people, motivated by ethical and legal concerns. We propose a benchmark framework for assessing discrimination-aware models. Our framework consists of systematically generated biased datasets that resemble real-world data, created with a Bayesian network approach. Experimental results show that the quality of techniques can be assessed through known metrics of discrimination, and that our flexible framework can be extended to most real datasets and fairness measures to support a diversity of assessments.
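One of the discrimination metrics named in the keywords, disparate impact, is commonly defined as the ratio of positive-outcome rates between an unprivileged and a privileged group. The sketch below is not the paper's implementation, only a minimal illustration of that standard definition (the group encoding 0/1 and the threshold of 0.8 from the well-known "80% rule" are assumptions, not taken from this abstract):

```python
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of positive-outcome rates: P(y=1 | unprivileged) / P(y=1 | privileged).

    A value near 1.0 indicates parity; the common "80% rule" flags
    values below 0.8 as evidence of adverse impact.
    Assumed encoding: group == 0 is unprivileged, group == 1 is privileged.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()  # positive rate, unprivileged group
    rate_priv = y_pred[group == 1].mean()    # positive rate, privileged group
    return rate_unpriv / rate_priv

# Toy example: 4/8 positives for the unprivileged group, 6/8 for the privileged one.
y = [1, 1, 1, 1, 0, 0, 0, 0,  1, 1, 1, 1, 1, 1, 0, 0]
g = [0] * 8 + [1] * 8
print(round(disparate_impact(y, g), 4))  # → 0.6667 (below 0.8: flagged)
```

Measuring such a metric on synthetic datasets with a known, controlled amount of bias is what allows a benchmark like the one proposed here to compare discrimination-aware techniques.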
Keywords
fairness-aware data mining, Bayesian networks, discrimination-aware benchmarks, disparate impact, disparate mistreatment