Mitigating Nonlinear Algorithmic Bias in Binary Classification
CoRR (2023)
Abstract
This paper proposes the use of causal modeling to detect and mitigate
algorithmic bias that is nonlinear in the protected attribute. We provide a
general overview of our approach. We use the German Credit data set, which is
available for download from the UC Irvine Machine Learning Repository, to
develop (1) a prediction model, which is treated as a black box, and (2) a
causal model for bias mitigation. In this paper, we focus on age bias and the
problem of binary classification. We show that the probability of getting
correctly classified as "low risk" is lowest among young people. The
probability increases with age nonlinearly. To incorporate the nonlinearity
into the causal model, we introduce a higher order polynomial term. Based on
the fitted causal model, the de-biased probability estimates are computed,
showing improved fairness with little impact on overall classification
accuracy. Causal modeling is intuitive and, hence, its use can enhance
explainability and promote trust among the different stakeholders of AI.
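The approach described above can be sketched as follows. This is a hedged illustration, not the authors' code: the synthetic data, coefficient values, and the choice of a mean reference age are all assumptions made for the example. A causal model with a quadratic age term is fitted to the black-box scores on the logit scale, and the estimated age effect is then replaced by the effect at a common reference age to produce de-biased probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for black-box "low risk" probabilities that rise
# nonlinearly with age (illustrative assumption, not the German Credit data).
n = 500
age = rng.uniform(19, 75, n)
true_logit = -2.0 + 0.08 * age - 0.0006 * age**2 + rng.normal(0, 0.3, n)
p_hat = 1 / (1 + np.exp(-true_logit))

# Causal model: regress the logit of the black-box score on age and age^2
# (the higher-order polynomial term capturing the nonlinearity).
X = np.column_stack([np.ones(n), age, age**2])
y = np.log(p_hat / (1 - p_hat))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# De-bias: replace each person's estimated age effect with the effect at a
# common reference age, so the prediction no longer varies with age.
ref_age = age.mean()
age_effect = beta[1] * age + beta[2] * age**2
ref_effect = beta[1] * ref_age + beta[2] * ref_age**2
y_debiased = y - age_effect + ref_effect
p_debiased = 1 / (1 + np.exp(-y_debiased))
```

After the adjustment, the de-biased logits are uncorrelated with age by construction, since least-squares residuals are orthogonal to the fitted age terms.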