How do fair decisions fare in long-term qualification?

NeurIPS 2020 (2020)

Cited by 3 | Views 39
EI

Abstract

Although many fairness criteria have been proposed for decision making, their long-term impact on the well-being of a population remains unclear. In this work, we study the dynamics of population qualification and algorithmic decisions under a partially observed Markov decision problem setting. By characterizing the equilibrium of such ...
Introduction
  • Automated decision making systems trained with real-world data can have inherent bias and exhibit discrimination against disadvantaged groups.
  • Recent studies have shown that imposing static fairness criteria intended to protect disadvantaged groups can lead to pernicious long-term effects [33, 47].
  • These long-term effects are heavily shaped by the interplay between algorithmic decisions and individuals' reactions [34]: algorithmic decisions lead to changes in the underlying feature distribution, which feeds back into the decision making process.
  • Understanding how this type of coupled dynamics evolves is a major challenge [10].
Highlights
  • We studied the long-term impact of fairness constraints (e.g., Demographic Parity (DP) and Equality of Opportunity (EqOpt)) on group qualification rates; a sketch of these two constraints as threshold policies follows this list.
  • Our findings show that the same fairness constraint can have opposite impact depending on the underlying problem scenarios, which highlights the importance of understanding real-world dynamics in decision making systems.
  • Our analysis has focused on scenarios with a unique equilibrium; scenarios with multiple equilibria or oscillating states remain an interesting direction of future research.
  • By conducting an equilibrium analysis and evaluating the long-term impact of different fairness criteria, our results provide a theoretical foundation that can help answer questions such as whether and when imposing short-term fairness constraints is effective in promoting long-term equality.
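To make the two constraints concrete: DP equalizes the groups' acceptance rates, while EqOpt equalizes their true positive rates. Below is a minimal Python sketch of group-dependent threshold policies satisfying either constraint via a grid search; the function names, the utility weights `u_plus`/`u_minus`, and the grid search itself are illustrative assumptions for this summary, not the authors' implementation (the paper characterizes the optimal constrained policies analytically).

```python
import numpy as np

def threshold_for_rate(scores, rate):
    """Threshold accepting roughly a `rate` fraction of `scores`."""
    return np.quantile(scores, 1.0 - rate)

def fair_thresholds(x_a, y_a, x_b, y_b, u_plus=1.0, u_minus=1.0, criterion="DP"):
    """Grid-search a pair of group thresholds satisfying DP or EqOpt.

    DP equalizes acceptance rates P(D=1 | S=s); EqOpt equalizes true
    positive rates P(D=1 | Y=1, S=s). Accepting a qualified individual
    earns u_plus; accepting an unqualified one costs u_minus.
    """
    best, best_util = None, -np.inf
    for r in np.linspace(0.01, 0.99, 99):  # shared constrained rate
        if criterion == "DP":
            th_a = threshold_for_rate(x_a, r)            # equal acceptance rates
            th_b = threshold_for_rate(x_b, r)
        else:
            th_a = threshold_for_rate(x_a[y_a == 1], r)  # equal TPRs
            th_b = threshold_for_rate(x_b[y_b == 1], r)
        util = 0.0
        for x, y, th in ((x_a, y_a, th_a), (x_b, y_b, th_b)):
            acc = x >= th
            util += u_plus * np.sum(acc & (y == 1)) - u_minus * np.sum(acc & (y == 0))
        if util > best_util:
            best_util, best = util, (th_a, th_b)
    return best

# Toy usage with Gaussian features shifted by qualification state:
rng = np.random.default_rng(0)
y_a = (rng.random(2000) < 0.6).astype(int)
y_b = (rng.random(2000) < 0.4).astype(int)
x_a = rng.normal(y_a.astype(float), 1.0)
x_b = rng.normal(y_b.astype(float), 1.0)
print(fair_thresholds(x_a, y_a, x_b, y_b, criterion="EqOpt"))
```

Among all threshold pairs that equalize the constrained rate, the search keeps the pair with the highest institutional utility, mirroring how constrained-optimal policies are typically defined.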
Methods
  • The authors conducted experiments on both Gaussian synthetic datasets and real-world datasets.
  • The authors present the synthetic data experiments in Appendix B and report the results on real-world datasets here.
  • These are static, one-shot datasets, which the authors use to create a simulated dynamic setting, as detailed below (a sketch of this simulation loop follows the list).
  • The authors use the FICO score dataset [42] to study the long-term impact of the fairness constraints EqOpt and DP, and of other interventions, on loan repayment rates in the Caucasian group $G_C$ and the African American group $G_{AA}$. (The referenced figure contrasts (a) D-invariant transitions with (b) D-variant transitions.)
  • This process repeats over time, and the qualification rates in both groups evolve accordingly.
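A hedged sketch of the simulated dynamic setting described above: starting from initial qualification rates, repeatedly (i) draw a population from the static dataset at the current rates, (ii) apply the (possibly fairness-constrained) threshold policy, and (iii) update each group's qualification rate through transition probabilities. Here `sample_group`, `policy`, and the transition table `T` are placeholders for quantities estimated from the static data (e.g., FICO repayment statistics), not the authors' released code.

```python
import numpy as np

def simulate_dynamics(alphas, sample_group, policy, T, steps=100):
    """Iterate the qualification-rate dynamics built on a static dataset.

    alphas:       {"a": alpha_a, "b": alpha_b}, initial qualification rates
    sample_group: (s, alpha) -> (features x, labels y in {0,1}) drawn from
                  the static dataset at qualification rate alpha
    policy:       current rates -> per-group acceptance thresholds
    T[s][y][d]:   P(Y(t+1)=1 | group s, current label y, decision d)
    """
    for _ in range(steps):
        thresholds = policy(alphas)  # e.g., DP- or EqOpt-constrained
        new = {}
        for s in ("a", "b"):
            x, y = sample_group(s, alphas[s])
            d = (x >= thresholds[s]).astype(int)
            # empirical next-step qualification rate under the transitions
            new[s] = float(np.mean([T[s][yi][di] for yi, di in zip(y, d)]))
        alphas = new
    return alphas
```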
Conclusion
  • The authors studied the long-term impact of fairness constraints (e.g., DP and EqOpt) on group qualification rates.
  • By casting the problem in a POMDP framework, the authors conducted equilibrium analysis.
  • The authors' findings show that the same fairness constraint can have opposite impact depending on the underlying problem scenarios, which highlights the importance of understanding real-world dynamics in decision making systems.
  • The authors' experiments on real-world static datasets with simulated dynamics show that the framework can be used to facilitate social science studies.
  • The authors' analysis has focused on scenarios with a unique equilibrium; scenarios with multiple equilibria or oscillating states remain an interesting direction of future research
Tables
  • Table 1
  • Table 2: $\alpha_a^C - \alpha_b^C$ for $C \in \{\mathrm{UN}, \mathrm{EqOpt}, \mathrm{DP}\}$ when $G_a^y(x) = G_b^y(x)$ and $T_{yd}^a = T_{yd}^b$ (notation defined below)
  • Table 3: $\alpha_a^C - \alpha_b^C$ for $C \in \{\mathrm{UN}, \mathrm{EqOpt}, \mathrm{DP}\}$ when $G_a^y(x) = G_b^y(x)$ and $T_{yd}^a = T_{yd}^b$ under Condition 1(B)
  • Table 4: Recidivism rates in the long run. UN*: unconstrained policy (UN) with the optimal threshold
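For reference, the notation used in Tables 2 and 3 (and throughout this summary), to our reading of the paper's definitions:

```latex
% Qualification rate of group s at time t:
\alpha_s(t) = P\big(Y(t) = 1 \mid S = s\big), \qquad s \in \{a, b\}
% Group- and label-conditional feature distribution:
G_s^y(x) = P\big(X = x \mid S = s, Y = y\big)
% Qualification transition given current label y and decision d:
T_{yd}^s = P\big(Y(t+1) = 1 \mid S = s, Y(t) = y, D(t) = d\big)
% Equilibrium qualification rate under a policy constrained by C:
\alpha_s^{C} = \lim_{t \to \infty} \alpha_s(t), \qquad C \in \{\mathrm{UN}, \mathrm{EqOpt}, \mathrm{DP}\}
```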
Related Work
  • Among existing works on fairness in sequential decision making problems [45], many assume that the population's feature distribution neither changes over time nor is affected by decisions; examples include studies on handling bias in online learning [6, 11, 12, 13, 16, 20, 28, 31] and bandit problems [4, 8, 26, 27, 32, 35, 39, 43]. The goal of most of these works is to design algorithms that can quickly learn a near-optimal policy from sequentially arriving data and partially observed information, and to understand the impact of imposing fairness interventions on the learned policy (e.g., on total utility, learning rate, sample complexity, etc.).

    However, recent studies [2, 7, 15] have shown that there exists a complex interplay between algorithmic decisions and individuals, e.g., user participation dynamics [19, 46, 47] and strategic reasoning in a game [23, 30], such that decision making directly leads to changes in the underlying feature distribution, which then feeds back into the decision making process. Many studies thus aim at understanding the impact of imposing fairness constraints when decisions affect the underlying feature distribution. For example, [33, 21, 29, 30] construct two-stage models that examine only the one-step impacts of fairness interventions on the underlying population, not the long-term impacts in a sequential framework; [24, 38] focus on fairness in reinforcement learning, where the goal is to learn a long-run optimal policy that maximizes cumulative rewards subject to certain fairness constraints; [19, 47] construct user participation dynamics models in which individuals respond to perceived decisions by leaving the system uniformly at random, with the goal of understanding the impact of various fairness interventions on group representation.

    Our work is most relevant to [23, 34, 37, 44], which study the long-term impact of decisions on groups' qualification states under different dynamics. In [23, 34], strategic individuals are assumed to be able to observe the current policy, based on which they can manipulate their qualification states strategically to receive better decisions; however, these works do not study how the sensitive attribute influences the dynamics or the impact of fairness constraints. Moreover, in many cases the qualification states are affected by both the policy and the qualifications at the previous time step, which is considered in [37, 44]. However, these works assume that the decision maker has access to the qualification states and that the dynamics of the qualification rates are the same across groups, i.e., equally qualified people from different groups, after perceiving the same decision, will have the same future qualification state. In fact, the qualification states are unobservable in most cases, and the dynamics can vary across groups. When such differences are taken into account, the dynamics become much more complicated, and social equality cannot be attained as easily as concluded in [37, 44].
Funding
  • Liu has been supported by the NSF under grants CNS-1616575, CNS-1646019, CNS-1739517, and IIS-2007951, and by the ARO under contract W911NF1810208.
  • Tu would like to acknowledge the funding support of the Swedish e-Science Research Centre and the material suggestions regarding the social impact of policies given by Yating Zhang.
  • Zhang would like to acknowledge the support of the United States Air Force under Contract No. FA8650-17-C-7715.
Study Subjects and Analysis
pairs: 3
This theorem shows that imposing fairness only helps when the "leg-up" effect is more prominent than the "lack of motivation" effect; alternatively, this suggests that when the "lack of motivation" effect is dominant, imposing fairness should be accompanied by other support structures to dampen this effect (e.g., by helping those accepted to become or remain qualified). Theorem 4 is illustrated in the accompanying plots, where transitions satisfy Condition 1(A)(B) and $G_a^y(x) = G_b^y(x)$ is Gaussian. Each plot includes 3 pairs of red/blue dashed curves corresponding to 3 policies (EqOpt, DP, UN). Points $(\alpha_a, \alpha_b)$ on these curves satisfy $\alpha_b = g_0^b(\alpha_a, \alpha_b)(1 - \alpha_b) + g_1^b(\alpha_a, \alpha_b)\,\alpha_b$ and $\alpha_a = g_0^a(\alpha_a, \alpha_b)(1 - \alpha_a) + g_1^a(\alpha_a, \alpha_b)\,\alpha_a$, respectively.
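Equilibria are thus fixed points of the coupled update map above. Here is a minimal sketch of locating one by fixed-point iteration, assuming the effective transition functions $g_0^s, g_1^s$ are given as callables (in the paper they are induced by the policy and the transition probabilities); the constant-function example at the bottom is illustrative only.

```python
import numpy as np

def equilibrium(g0a, g1a, g0b, g1b, alpha0=(0.5, 0.5), tol=1e-10, max_iter=100_000):
    """Fixed-point iteration for the coupled equations
        alpha_s = g0_s(alpha_a, alpha_b) * (1 - alpha_s)
                + g1_s(alpha_a, alpha_b) * alpha_s,   s in {a, b}.
    Each g function maps (alpha_a, alpha_b) to a probability in [0, 1].
    """
    aa, ab = alpha0
    for _ in range(max_iter):
        aa_new = g0a(aa, ab) * (1 - aa) + g1a(aa, ab) * aa
        ab_new = g0b(aa, ab) * (1 - ab) + g1b(aa, ab) * ab
        if abs(aa_new - aa) + abs(ab_new - ab) < tol:
            return aa_new, ab_new
        aa, ab = aa_new, ab_new
    return aa, ab  # may not have converged (e.g., oscillating dynamics)

# Constant transition functions admit the closed form
# alpha_s* = g0 / (1 - g1 + g0); here the result is (0.6, 0.4).
print(equilibrium(lambda a, b: 0.3, lambda a, b: 0.8,
                  lambda a, b: 0.2, lambda a, b: 0.7))
```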

References
  • [1] Target corporation to pay $2.8 million to resolve EEOC discrimination finding. In U.S. Equal Employment Opportunity Commission, 2015. https://www.eeoc.gov/newsroom/target-corporation-pay-28-million-resolve-eeoc-discrimination-
  • [2] A. P. Aneja and C. F. Avenancio-León. No credit for time served? Incarceration and credit-driven crime cycles. 2019.
  • [3] J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine bias: There's software used across the country to predict future criminals. And it's biased against blacks. ProPublica, 23, 2016.
  • [4] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
  • [5] S. Barocas, M. Hardt, and A. Narayanan. Fairness and Machine Learning. fairmlbook.org, 2019. http://www.fairmlbook.org.
  • [6] Y. Bechavod, K. Ligett, A. Roth, B. Waggoner, and S. Z. Wu. Equal opportunity in online classification with partial feedback. In Advances in Neural Information Processing Systems 32, pages 8972–8982, 2019.
  • [7] A. J. Chaney, B. M. Stewart, and B. E. Engelhardt. How algorithmic confounding in recommendation systems increases homogeneity and decreases utility. In Proceedings of the 12th ACM Conference on Recommender Systems, pages 224–232. ACM, 2018.
  • [8] Y. Chen, A. Cuellar, H. Luo, J. Modi, H. Nemlekar, and S. Nikolaidis. Fair contextual multi-armed bandits: Theory and experiments. arXiv preprint arXiv:1912.08055, 2019.
  • [9] S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 797–806, 2017.
  • [10] A. D'Amour, H. Srinivasan, J. Atwood, P. Baljekar, D. Sculley, and Y. Halpern. Fairness is not static: Deeper understanding of long term fairness via simulation studies. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 525–534, 2020.
  • [11] C. Dimitrakakis, Y. Liu, D. Parkes, and G. Radanovic. Bayesian fairness. In AAAI, 2019.
  • [12] D. Ensign, S. A. Friedler, S. Neville, C. Scheidegger, and S. Venkatasubramanian. Runaway feedback loops in predictive policing. In Conference on Fairness, Accountability, and Transparency, 2018.
  • [13] D. Ensign, S. A. Friedler, S. Neville, C. Scheidegger, and S. Venkatasubramanian. Decision making with limited feedback. In Algorithmic Learning Theory, pages 359–367, 2018.
  • [14] E. Fehr, L. Goette, and C. Zehnder. A behavioral account of the labor market: The role of fairness concerns. Annual Review of Economics, 1(1):355–384, 2009.
  • [15] A. Fuster, P. Goldsmith-Pinkham, T. Ramadorai, and A. Walther. Predictably unequal? The effects of machine learning on credit markets. 2018.
  • [16] S. Gillen, C. Jung, M. Kearns, and A. Roth. Online learning with an unknown fairness metric. In Advances in Neural Information Processing Systems, pages 2600–2609, 2018.
  • [17] K. F. Gotham. Race, mortgage lending and loan rejections in a US city. Sociological Focus, 31(4):391–405, 1998.
  • [18] M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, pages 3315–3323, 2016.
  • [19] T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang. Fairness without demographics in repeated loss minimization. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1929–1938. PMLR, 2018.
  • [20] H. Heidari and A. Krause. Preventing disparate treatment in sequential decision making. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 2248–2254, 2018.
  • [21] H. Heidari, V. Nanda, and K. Gummadi. On the long-term impact of algorithmic decision policies: Effort unfairness and feature segregation through social learning. In International Conference on Machine Learning, pages 2692–2701, 2019.
  • [22] T. Homonoff, R. O'Brien, and A. B. Sussman. Does knowing your FICO score change financial behavior? Evidence from a field experiment with student loan borrowers. Review of Economics and Statistics, pages 1–45.
  • [23] L. Hu, N. Immorlica, and J. W. Vaughan. The disparate effects of strategic manipulation. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 259–268, 2019.
  • [24] S. Jabbari, M. Joseph, M. Kearns, J. Morgenstern, and A. Roth. Fairness in reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pages 1617–1626. JMLR.org, 2017.
  • [25] I. James. Estimation of the mixing proportion in a mixture of two normal distributions from simple, rapid measurements. Biometrics, pages 265–275, 1978.
  • [26] M. Joseph, M. Kearns, J. H. Morgenstern, and A. Roth. Fairness in learning: Classic and contextual bandits. In Advances in Neural Information Processing Systems, pages 325–333, 2016.
  • [27] M. Joseph, M. Kearns, J. Morgenstern, S. Neel, and A. Roth. Meritocratic fairness for infinite and contextual bandits. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 158–163. ACM, 2018.
  • [28] N. Kallus and A. Zhou. Residual unfairness in fair machine learning from prejudiced data. In Proceedings of the 35th International Conference on Machine Learning, 2018.
  • [29] S. Kannan, A. Roth, and J. Ziani. Downstream effects of affirmative action. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 240–248. ACM, 2019.
  • [30] M. Khajehnejad, B. Tabibian, B. Schölkopf, A. Singla, and M. Gomez-Rodriguez. Optimal decision making under strategic behavior. arXiv preprint arXiv:1905.09239, 2019.
  • [31] N. Kilbertus, M. Gomez-Rodriguez, B. Schölkopf, K. Muandet, and I. Valera. Improving consequential decision making under imperfect predictions. arXiv preprint arXiv:1902.02979, 2019.
  • [32] F. Li, J. Liu, and B. Ji. Combinatorial sleeping bandits with fairness constraints. In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pages 1702–1710. IEEE, 2019.
  • [33] L. T. Liu, S. Dean, E. Rolf, M. Simchowitz, and M. Hardt. Delayed impact of fair machine learning. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 3150–3158. PMLR, 2018.
  • [34] L. T. Liu, A. Wilson, N. Haghtalab, A. T. Kalai, C. Borgs, and J. Chayes. The disparate equilibria of algorithmic decision making when individuals invest rationally. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 381–391, 2020.
  • [35] Y. Liu, G. Radanovic, C. Dimitrakakis, D. Mandal, and D. C. Parkes. Calibrated fairness in bandits. arXiv preprint arXiv:1707.01875, 2017.
  • [36] C. A. Mallett. Disproportionate minority contact in juvenile justice: Today's, and yesterdays, problems. Criminal Justice Studies, 31(3):230–248, 2018.
  • [37] H. Mouzannar, M. I. Ohannessian, and N. Srebro. From fair decision making to social equality. In Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19, pages 359–368, 2019.
  • [38] R. Nabi, D. Malinsky, and I. Shpitser. Learning optimal fair policies. Proceedings of Machine Learning Research, 97:4674, 2019.
  • [39] V. Patil, G. Ghalme, V. Nair, and Y. Narahari. Achieving fairness in the stochastic multi-armed bandit problem. arXiv preprint arXiv:1907.10516, 2019.
  • [40] R. K. Patra and B. Sen. Estimation of a two-component mixture model with applications to multiple testing. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 78(4):869–893, 2016.
  • [41] J. Paxton, D. Graham, and C. Thraen. Modeling group loan repayment behavior: New insights from Burkina Faso. Economic Development and Cultural Change, 48(3):639–655, 2000.
  • [42] Board of Governors of the Federal Reserve System. Report to the Congress on credit scoring and its effects on the availability and affordability of credit. 2007.
  • [43] W. Tang, C.-J. Ho, and Y. Liu. Fair bandit learning with delayed impact of actions. arXiv preprint arXiv:2002.10316, 2020.
  • [44] J. Williams and J. Z. Kolter. Dynamic modeling and equilibria in fair decision making. arXiv preprint arXiv:1911.06837, 2019.
  • [45] X. Zhang and M. Liu. Fairness in learning-based sequential decision algorithms: A survey. arXiv preprint arXiv:2001.04861, 2020.
  • [46] X. Zhang, M. M. Khalili, and M. Liu. Long-term impacts of fair machine learning. Ergonomics in Design, 2019. doi:10.1177/1064804619884160.
  • [47] X. Zhang, M. M. Khalili, C. Tekin, and M. Liu. Group retention when using machine learning in sequential decision making: The interplay between user dynamics and fairness. In Advances in Neural Information Processing Systems, pages 15243–15252, 2019.