Opening the Black Box: The Promise and Limitations of Explainable Machine Learning in Cardiology.

The Canadian Journal of Cardiology (2021)

Abstract
Many clinicians remain wary of machine learning because of longstanding concerns about "black box" models. "Black box" is shorthand for models that are sufficiently complex that they are not straightforwardly interpretable to humans. Lack of interpretability in predictive models can undermine trust in those models, especially in health care, in which so many decisions are, quite literally, life-and-death issues. There has been a recent explosion of research in the field of explainable machine learning aimed at addressing these concerns. The promise of explainable machine learning is considerable, but it is important for cardiologists who may encounter these techniques in clinical decision-support tools or novel research papers to have a critical understanding of both their strengths and their limitations. This paper reviews key concepts and techniques in the field of explainable machine learning as they apply to cardiology. Key concepts reviewed include interpretability vs explainability and global vs local explanations. Techniques demonstrated include permutation importance, surrogate decision trees, local interpretable model-agnostic explanations, and partial dependence plots. We discuss several limitations of explainability techniques, focusing on how the nature of explanations as approximations may omit important information about how black-box models work and why they make certain predictions. We conclude by proposing a rule of thumb about when it is appropriate to use black-box models with explanations rather than interpretable models.
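To make two of the global explanation techniques named above concrete, the following is a minimal sketch (not taken from the paper) of permutation importance and a global surrogate decision tree, applied to a hypothetical black-box gradient-boosting classifier on synthetic tabular data; the dataset, feature names, and model choice are illustrative assumptions only.

```python
# Illustrative sketch (assumptions, not the paper's code): permutation
# importance and a global surrogate decision tree for a "black-box" model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a tabular clinical dataset (hypothetical features).
X, y = make_classification(n_samples=2000, n_features=6, n_informative=4,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Black-box" model: a gradient-boosted tree ensemble.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# 1) Permutation importance: the drop in held-out score when each feature
#    is randomly shuffled, averaged over repeats.
result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, mean_imp in zip(feature_names, result.importances_mean):
    print(f"{name}: {mean_imp:.3f}")

# 2) Global surrogate: a shallow decision tree trained to mimic the
#    black-box model's predictions, giving an approximate, readable summary
#    of its global behaviour (note: an approximation, not the model itself).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
print(export_text(surrogate, feature_names=feature_names))
```

As the abstract cautions, both outputs are approximations of the black-box model's behaviour and may omit important information about how it actually makes its predictions.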