Explainable artificial intelligence model for mortality risk prediction in the intensive care unit: a derivation and validation study

Postgraduate Medical Journal (2024)

Abstract
Background
The lack of transparency is a prevalent issue among current machine-learning (ML) algorithms used for predicting mortality risk. Here, we aimed to improve transparency by applying a state-of-the-art ML explainability technique, SHapley Additive exPlanations (SHAP), to develop a predictive model for critically ill patients.

Methods
We extracted data from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database, encompassing all intensive care unit admissions. We developed models using nine different methods; the model with the highest area under the receiver operating characteristic curve (AUROC) was selected as the optimal model. We then used SHAP to explain the workings of the ML model.

Results
The study included 21 395 critically ill patients with a median age of 68 years (interquartile range, 56-79 years); most patients were male (56.9%). The cohort was randomly split into a training set (n = 16 046) and a validation set (n = 5 349). Among the nine models developed, the Random Forest model achieved the highest accuracy (87.62%) and the best AUROC (0.89). SHAP summary analysis showed that the Glasgow Coma Scale, urine output, and blood urea nitrogen were the top three risk factors for outcome prediction. SHAP dependence analysis and SHAP force analysis were then used to interpret the Random Forest model at the factor level and the individual level, respectively.

Conclusion
A transparent ML model for predicting outcomes in critically ill patients using the SHAP methodology is feasible and effective. SHAP values substantially improve the explainability of ML models.

Key messages
What is already known on this topic: The lack of transparency is a prevalent issue among current ML algorithms used for predicting mortality risk.
What this study adds: We developed a high-performance ML model for predicting in-hospital mortality in critically ill patients (accuracy: 87.62%; AUROC: 0.89) and used a state-of-the-art ML interpretability technique, SHAP, to increase model transparency.
How this study might affect research, practice, or policy: Our findings could help healthcare providers identify high-risk factors contributing to mortality in daily clinical practice, which in turn could help align goals of care and improve healthcare resource utilization.
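For readers interested in the general shape of such a workflow, the sketch below illustrates the kind of pipeline the abstract describes: fitting a Random Forest on a MIMIC-IV-derived cohort, evaluating it by accuracy and AUROC, and producing SHAP summary, dependence, and force analyses. This is a minimal sketch, not the authors' code: the file name, feature columns (e.g. "gcs"), hyperparameters, and split seed are all assumptions, and it assumes a shap release in which TreeExplainer.shap_values returns one array per class for scikit-learn classifiers.

```python
# Minimal sketch of the modelling approach described in the abstract.
# Assumptions (not from the paper): file name, column names, hyperparameters,
# random seed, and a shap version whose shap_values returns a per-class list.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical feature table derived from MIMIC-IV; "mortality" is the label.
df = pd.read_csv("mimic_iv_cohort.csv")
X, y = df.drop(columns=["mortality"]), df["mortality"]

# 75/25 split, mirroring the paper's 16 046 / 5 349 training/validation split.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X_train, y_train)

# Evaluate with accuracy and AUROC, the paper's model-selection criteria.
proba = model.predict_proba(X_val)[:, 1]
print("Accuracy:", accuracy_score(y_val, model.predict(X_val)))
print("AUROC:   ", roc_auc_score(y_val, proba))

# SHAP explanations at three levels, as in the paper: summary (cohort),
# dependence (factor), and force (individual patient).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_val)[1]  # attributions for class 1 (death)

shap.summary_plot(shap_values, X_val)            # global feature ranking
shap.dependence_plot("gcs", shap_values, X_val)  # factor-level view (assumed column)
shap.force_plot(                                 # individual-level view
    explainer.expected_value[1], shap_values[0], X_val.iloc[0], matplotlib=True
)
```

In this style of analysis, the summary plot would surface the cohort-level ranking the abstract reports (Glasgow Coma Scale, urine output, blood urea nitrogen), while the dependence and force plots show how a single feature or a single patient's values push the prediction up or down.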
Keywords
explainable artificial intelligence, machine learning, critical illness, mortality