Explaining deep learning for ECG analysis: Building blocks for auditing and knowledge discovery

Computers in Biology and Medicine (2024)

Abstract
Deep neural networks have become increasingly popular for analyzing ECG data because of their ability to accurately identify cardiac conditions and hidden clinical factors. However, the lack of transparency due to the black-box nature of these models is a common concern. To address this issue, explainable AI (XAI) methods can be employed. In this study, we present a comprehensive analysis of post-hoc XAI methods, investigating the glocal (aggregated local attributions over multiple samples) and global (concept-based XAI) perspectives. We establish a set of sanity checks that identify saliency as the most sensible attribution method. We provide a dataset-wide analysis across entire patient subgroups, which goes beyond anecdotal evidence, to establish the first quantitative evidence for the alignment of model behavior with cardiologists' decision rules. Furthermore, we demonstrate how these XAI techniques can be utilized for knowledge discovery, such as identifying subtypes of myocardial infarction. We believe that the proposed methods can serve as building blocks for a complementary assessment of internal validity during a certification process, as well as for knowledge discovery in the field of ECG analysis.
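As a rough illustration of the glocal perspective described above, the sketch below computes input-gradient saliency per ECG sample and averages it over a patient subgroup. It assumes a PyTorch 1D-CNN classifier, a DataLoader yielding subgroup samples, and a hypothetical target class index; it is not the authors' implementation.

```python
# Minimal sketch (assumptions: PyTorch model taking (batch, leads, samples) ECGs,
# a DataLoader over one patient subgroup, and a chosen target class index).
import torch
import torch.nn as nn


def saliency(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Gradient-of-output-w.r.t.-input saliency for one batch of ECGs.

    x: (batch, leads, samples); returns absolute gradients with the same shape.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)                      # (batch, num_classes)
    score = logits[:, target_class].sum()  # scalar for backprop
    score.backward()
    return x.grad.abs().detach()


def glocal_attribution(model: nn.Module, loader, target_class: int) -> torch.Tensor:
    """Aggregate local saliency maps over all samples of a subgroup ("glocal")."""
    model.eval()
    total, count = None, 0
    for x, _ in loader:                    # hypothetical loader of subgroup ECGs
        s = saliency(model, x, target_class)
        total = s.sum(dim=0) if total is None else total + s.sum(dim=0)
        count += x.shape[0]
    return total / count                   # mean attribution per lead and time step
```

In practice, such subgroup-level averages are only meaningful if the samples are temporally comparable (e.g., beat-aligned segments); how alignment is handled is a design choice not specified here.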
Keywords
Deep neural networks, Electrocardiography, Explainable AI (XAI), Knowledge discovery, Post-hoc XAI methods, Time series analysis