
Using Explainable Machine Learning to Explore the Impact of Synoptic Reporting on Prostate Cancer

Femke M. Janssen, Katja K. H. Aben, Berdine L. Heesterman, Quirinus J. M. Voorham, Paul A. Seegers, Arturo Moncada-Torres

Algorithms (2022)

Abstract
Machine learning (ML) models have proven to be an attractive alternative to traditional statistical methods in oncology. However, they are often regarded as black boxes, hindering their adoption for answering real-life clinical questions. In this paper, we show a practical application of explainable machine learning (XML). Specifically, we explored the effect that synoptic reporting (SR; i.e., reports where data elements are presented as discrete data items) in pathology has on the survival of a population of 14,878 Dutch prostate cancer patients. We compared the performance of a Cox Proportional Hazards model (CPH) against that of an eXtreme Gradient Boosting model (XGB) in predicting ranked patient survival. We found that the XGB model (c-index = 0.67) performed significantly better than the CPH (c-index = 0.58). Moreover, we used Shapley Additive Explanations (SHAP) values to generate a quantitative mathematical representation of how features—including usage of SR—contributed to the models’ output. The XGB model in combination with SHAP visualizations revealed interesting interaction effects between SR and the rest of the most important features. These results hint that SR has a moderate positive impact on predicted patient survival. Moreover, adding an explainability layer to predictive ML models can open their black box, making them more accessible and easier to understand by the user. This can make XML-based techniques appealing alternatives to the classical methods used in oncological research and in health care in general.
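The abstract compares the two models on the concordance index (c-index), the standard metric for ranked survival prediction. As a minimal sketch of what that metric measures — not the authors' actual evaluation pipeline, and using hypothetical toy data — Harrell's c-index can be computed by counting comparable pairs under right censoring:

```python
import numpy as np

def c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter
    follow-up time actually experienced the event (was not censored).
    The pair is concordant when that subject also received the higher
    predicted risk; tied risks count as half-concordant.
    """
    concordant = 0.0
    comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical toy cohort: shorter survival paired with higher predicted risk.
times = np.array([2.0, 5.0, 7.0, 9.0])
events = np.array([1, 1, 0, 1])          # 0 = censored observation
risk = np.array([0.9, 0.6, 0.4, 0.2])    # perfectly anti-ordered with time
print(c_index(times, events, risk))      # perfect ranking -> 1.0
```

A c-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported XGB (0.67) vs. CPH (0.58) gap in context.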
Keywords
Cox Proportional Hazards (CPH),explainable AI,eXtreme Gradient Boosting (XGB),interpretability,oncology,prostatectomy,ranked survival,SHAP