Framework for Integrating Equity Into Machine Learning Models: A Case Study

Chest (2022)

Abstract
Predictive analytic models leveraging machine learning methods increasingly have become vital to health care organizations hoping to improve clinical outcomes and the efficiency of care delivery for all patients. Unfortunately, predictive models could harm populations that have experienced interpersonal, institutional, and structural biases. Models learn from historically collected data that could be biased. In addition, bias impacts a model's development, application, and interpretation. We present a strategy to evaluate for and mitigate biases in machine learning models that potentially could create harm. We recommend analyzing for disparities between less and more socially advantaged populations across model performance metrics (eg, accuracy, positive predictive value), patient outcomes, and resource allocation, then identifying root causes of the disparities (eg, biased data, interpretation) and brainstorming solutions to address them. This strategy follows the lifecycle of machine learning models in health care, namely, identifying the clinical problem, model design, data collection, model training, model validation, model deployment, and monitoring after deployment. To illustrate this approach, we use a hypothetical case of a health system developing and deploying a machine learning model that predicts 6-month mortality risk for patients admitted to the hospital, in order to target the hospital's delivery of palliative care services to those with the highest mortality risk. The core ethical concepts of equity and transparency guide our proposed framework to help ensure the safe and effective use of predictive algorithms in health care so that everyone can achieve their best possible health.
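The disparity analysis the abstract recommends amounts to disaggregating model performance metrics by social subgroup and comparing them. The sketch below is a minimal, hypothetical illustration of that step, not code from the paper: it assumes a pandas DataFrame with a binary 6-month mortality label, a binary model prediction, and a subgroup column (all column names are illustrative), and reports per-subgroup accuracy and positive predictive value so that disparities can be flagged for root-cause review.

```python
# Minimal sketch (assumed names, not from the paper): disaggregate accuracy and
# positive predictive value (PPV) by a social subgroup attribute.
import pandas as pd

def subgroup_metrics(df: pd.DataFrame, group_col: str,
                     label_col: str = "died_6mo",
                     pred_col: str = "pred_high_risk") -> pd.DataFrame:
    """Per-subgroup accuracy and PPV for a binary mortality-risk model."""
    rows = []
    for group, g in df.groupby(group_col):
        tp = ((g[pred_col] == 1) & (g[label_col] == 1)).sum()
        fp = ((g[pred_col] == 1) & (g[label_col] == 0)).sum()
        accuracy = (g[pred_col] == g[label_col]).mean()
        # PPV is undefined when the model flags no one in the subgroup.
        ppv = tp / (tp + fp) if (tp + fp) > 0 else float("nan")
        rows.append({group_col: group, "n": len(g),
                     "accuracy": accuracy, "ppv": ppv})
    return pd.DataFrame(rows)

# Hypothetical usage: subgroups whose PPV falls well below the overall PPV
# would then be investigated for root causes (eg, biased data, interpretation).
# report = subgroup_metrics(predictions_df, group_col="insurance_type")
# print(report.sort_values("ppv"))
```

The same disaggregation would be repeated for patient outcomes and resource allocation, per the proposed framework.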
Keywords
bias, disparities, equity, framework, machine learning