A Deep-Learning-Based Multi-modal ECG and PCG Processing Framework for Cardiac Analysis

Qijia Huang, Huanrui Yang, Eric Zeng, Yiran Chen

Crossref (2022)

Abstract
The need for telehealth and home-based monitoring surged during the COVID-19 pandemic. Building on recent advances in wearable sensors that capture concurrent electrocardiogram (ECG) and phonocardiogram (PCG) signals, this paper proposes a novel framework for synchronized ECG and PCG analysis for cardiac function monitoring. Our system jointly performs R-peak detection on ECG, fundamental heart sound segmentation on PCG, and cardiac condition classification. First, we propose a recurrent-neural-network-based R-peak detection algorithm with a new labeling method: the labeling strategy uses a regression objective to resolve the class imbalance that hampered previous classification formulations. Second, we propose a 1D U-Net structure for PCG segmentation within a single heartbeat length. We further exploit the multi-modality of the signals and contrastive learning to enhance model performance. Finally, we extract 20 features from our signal labeling algorithms and apply them to two real-world problems: snore detection during sleep and COVID-19 detection. The proposed method achieves state-of-the-art performance on multiple benchmarks using two public datasets, MIT-BIH and PhysioNet 2016, and provides a cost-effective alternative to labor-intensive manual segmentation with more accurate segmentation than existing methods. On a dataset collected by Bayland Scientific that includes synchronized ECG and PCG signals, the proposed system achieves end-to-end R-peak detection with an F1 score of 99.84%, heart sound segmentation with an F1 score of 91.25%, and snore and COVID-19 detection with accuracies of 96.30% and 95.06%, respectively.
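The abstract describes recasting R-peak detection from a sparse, heavily imbalanced per-sample classification problem into a regression problem. A minimal sketch of one common way to build such a dense regression target, assuming (the paper does not specify this) a Gaussian bump of width `sigma` placed at each annotated R-peak:

```python
import numpy as np

def regression_labels(signal_len, r_peaks, sigma=10):
    """Hypothetical regression-style labeling for R-peak detection.

    Instead of a binary label per sample (almost all zeros, so classes
    are badly imbalanced), each annotated R-peak contributes a Gaussian
    bump, giving the network a dense target to regress. `sigma` is an
    assumed bump width in samples, not a value from the paper.
    """
    t = np.arange(signal_len)
    target = np.zeros(signal_len)
    for p in r_peaks:
        # Overlapping bumps are merged with an elementwise maximum.
        target = np.maximum(target, np.exp(-0.5 * ((t - p) / sigma) ** 2))
    return target

# Example: two R-peaks at samples 200 and 600 in a 1000-sample window.
labels = regression_labels(1000, r_peaks=[200, 600], sigma=10)
```

At inference time, peaks of the regressed signal (e.g. local maxima above a threshold) would be read back as the predicted R-peak positions.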