Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics
EC '20: The 21st ACM Conference on Economics and Computation, Virtual Event, Hungary, July 2020
Abstract
Why do biased algorithmic predictions arise, and what interventions can prevent them? We examine this question with a field experiment on using machine learning to predict human capital. We randomly assign approximately 400 AI engineers to develop software under different experimental conditions to predict standardized test scores of OECD residents. We then assess the resulting predictive algorithms using the realized test performances and through randomized audit-like manipulations of algorithmic inputs. We also use the diversity of our subject population to measure whether demographically non-traditional engineers are more likely to notice and reduce algorithmic bias, and whether algorithmic prediction errors are correlated within programmer demographic groups. This document describes our experimental design and motivation; the full results of our experiment are available at https://ssrn.com/abstract=3615404.
Keywords
biased programmers, ethics, biased data