Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network

Nature Medicine, Volume 25, Issue 1, 2019, Pages 65–69.

DOI: https://doi.org/10.1038/s41591-018-0268-3
Abstract:

Computerized electrocardiogram (ECG) interpretation plays a critical role in the clinical ECG workflow. Widely available digital ECG data and the algorithmic paradigm of deep learning present an opportunity to substantially improve the accuracy and scalability of automated ECG analysis. However, a comprehensive evaluation of an end-to-end deep learning approach for ECG analysis across a wide variety of diagnostic classes has not been previously reported.

Introduction
  • The electrocardiogram (ECG) is a fundamental tool in the everyday practice of clinical medicine, with more than 300 million ECGs obtained annually worldwide[3].
  • The authors developed a deep neural network to detect 12 rhythm classes from raw single-lead ECG inputs using a training dataset comprising 91,232 ECG records from 53,549 patients.
Highlights
  • Our study demonstrates that the paradigm shift represented by end-to-end deep learning may enable a new approach to automated ECG analysis
  • In order to compare the relative performance of the DNN to the cardiologist committee labels, we calculated the F1 score, which is the harmonic mean of the positive predictive value and sensitivity
  • Paired with properly annotated digital ECG data, our approach has the potential to increase the overall accuracy of preliminary computerized ECG interpretations and can be used to customize predictions to institution-specific or population-specific applications by additional training on institution-specific data
  • We used confusion matrices to illustrate the specific examples of rhythm classes where the DNN prediction or the individual cardiologist predictions were discordant with the committee consensus at the sequence-level
Results
  • The authors validated the DNN on a test dataset that consisted of 328 ECG records collected from 328 unique patients, and which was annotated by a consensus committee of expert cardiologists.
  • Using the committee labels as the gold standard, the authors compared the DNN algorithm F1 score, the harmonic mean of the positive predictive value and sensitivity, to the average individual cardiologist F1 score (Table 1b).
  • The authors emphasize the use in this study of a dataset large enough to evaluate an end-to-end deep learning approach to predict multiple diagnostic ECG classes, and the validation against the high standard of a cardiologist consensus committee.
  • End-to-end DNN approaches have been used more recently, showing good performance for a limited set of ECG rhythms such as AF[22,23,36], ventricular arrhythmias[21], or individual heartbeat classes[20,21,37,38].
  • The authors' input dataset is limited to single-lead ECG records obtained from an ambulatory monitor, which provides limited signal compared with a standard 12-lead ECG; it remains to be determined whether the algorithm's performance would be similar on 12-lead ECGs. It may be in applications such as this, which have a lower signal-to-noise ratio and where the current standard of care leaves more room for improvement, that approaches such as deep learning provide the greatest impact.
  • Given the resource-intensive nature of cardiologist-committee ECG annotation, the test dataset was limited to records from 328 patients; confidence intervals at this test dataset size were acceptably narrow, as the authors report in Table 1a, though the ability to perform subgroup analysis is limited.
  • The authors demonstrate that an end-to-end deep learning approach can classify a broad range of distinct arrhythmias from single-lead ECGs with high diagnostic performance similar to that of cardiologists.
  • Evaluation at the set level is a useful abstraction, approximating how the DNN algorithm might be applied to a single ECG record to identify which diagnoses are present in a given record.
Tables
  • Table1: Diagnostic performance of the deep neural network and averaged individual cardiologists compared to the cardiologist committee consensus (n=328). a, Deep Neural Network algorithm area under the receiver operating characteristic curve (AUC) compared to the cardiologist committee consensus. b, Deep Neural Network algorithm and averaged individual cardiologist F1 scores compared to the cardiologist committee consensus
  • Table2: Deep Neural Network algorithm and cardiologist sensitivity compared to the cardiologist committee consensus, with specificity fixed at the average specificity level achieved by cardiologists
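The fixed-specificity comparison in Table 2 can be sketched as follows: pick the DNN operating point whose specificity is closest to the cardiologists' average, then report its sensitivity. This is an illustrative sketch, not the authors' code; the operating points below are hypothetical.

```python
# Hypothetical sketch of the Table 2 comparison: select the DNN
# operating point whose specificity is closest to a target (the
# cardiologists' average) and report the sensitivity there.

def sensitivity_at_specificity(roc_points, target_spec):
    """roc_points: (specificity, sensitivity) pairs along the ROC curve."""
    closest = min(roc_points, key=lambda p: abs(p[0] - target_spec))
    return closest[1]

# Illustrative operating points (made up, not from the paper):
points = [(0.80, 0.99), (0.90, 0.95), (0.95, 0.90)]
print(sensitivity_at_specificity(points, 0.90))  # -> 0.95
```

A real evaluation would derive the operating points by sweeping the DNN's decision threshold over the test set.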
Funding
  • iRhythm Technologies Inc. provided financial support for the data annotation in this work
  • Awni Hannun was funded by an NVIDIA fellowship.
Role of the funding source
  • The only financial support provided by iRhythm Technologies Inc. for this study was for the data annotation
Study subjects and analysis
patients: 53877
However, a comprehensive evaluation of an end-to-end deep learning approach for ECG analysis across a wide variety of diagnostic classes has not been previously reported. Here, we develop a deep neural network (DNN) to classify 12 rhythm classes using 91,232 single-lead ECGs from 53,877 patients who used a single-lead ambulatory ECG monitoring device. When validated against an independent test dataset annotated by a consensus committee of board-certified practicing cardiologists, the DNN achieved an average area under the receiver operating characteristic curve (AUC) …

ECG records: 91232
In this study, we constructed a large, novel ECG dataset which underwent expert annotation for a broad range of ECG rhythm classes. We developed a deep neural network to detect 12 rhythm classes from raw single-lead ECG inputs using a training dataset comprising 91,232 ECG records from 53,549 patients. The DNN was designed to classify 10 arrhythmias as well as sinus rhythm and noise for a total of 12 output rhythm classes (Extended Data Figure 1)

ECG records: 328
Mean age was 69+/−16 years and 43% were female. We validated the DNN on a test dataset that consisted of 328 ECG records collected from 328 unique patients, and which was annotated by a consensus committee of expert cardiologists (see Methods). Mean age on the test dataset was 70+/−17 years and 38% were female

separate individual cardiologists: 6
With one exception, all sensitivity and specificity pairs were >90%. In addition to a cardiologist consensus committee annotation, each ECG record in the test dataset received annotations from 6 separate individual cardiologists that were not part of the committee (see Methods). Using the committee labels as the gold-standard, we compared the DNN algorithm F1 score to the average individual cardiologist F1 score—which is the harmonic mean of the positive predictive value (precision) and sensitivity (recall) (Table 1b)

individual cardiologists: 6
Using the committee labels as the gold-standard, we compared the DNN algorithm F1 score to the average individual cardiologist F1 score—which is the harmonic mean of the positive predictive value (precision) and sensitivity (recall) (Table 1b). Cardiologist F1 scores are averaged over 6 individual cardiologists. The trend of the DNN’s F1 scores tended to follow that of the averaged cardiologist F1 scores: both had lower F1 on similar classes such as ventricular tachycardia and ectopic atrial rhythm
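The F1 metric described above can be written out directly. This is a generic per-class sketch in plain Python with illustrative names, not the paper's evaluation code.

```python
# Per-class F1 as the harmonic mean of positive predictive value
# (precision) and sensitivity (recall), from binary labels and
# predictions for a single rhythm class. Illustrative sketch only.

def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)  # positive predictive value
    recall = tp / (tp + fn)     # sensitivity
    return 2 * precision * recall / (precision + recall)

print(f1_score([1, 1, 0, 0], [1, 0, 1, 0]))  # -> 0.5
```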

data: 8761
We performed multiple sensitivity analyses, all of which were consistent with our main results: both AUC and F1 scores on the 10% development dataset (n=8,761) were materially unchanged from the test dataset results, though were slightly higher (Supplementary Tables 3–4). In addition, we retrained the DNN holding out an additional 10% of the training dataset as a second held-out test dataset (n=8,768), and AUC and F1 scores for all rhythms were materially unchanged (Supplementary Tables 5–6)

data: 8768
development dataset (n=8,761) were materially unchanged from the test dataset results, though were slightly higher (Supplementary Tables 3–4). In addition, we retrained the DNN holding out an additional 10% of the training dataset as a second held-out test dataset (n=8,768), and AUC and F1 scores for all rhythms were materially unchanged (Supplementary Tables 5–6). We note that unlike the primary test dataset which has gold standard annotations from a committee of cardiologists, both sensitivity analysis datasets are annotated by certified ECG technicians

data: 8528
Finally, in order to demonstrate the generalizability of our DNN architecture to external data, we applied our DNN to the 2017 Physionet Challenge data (https://physionet.org/challenge/2017/) which contained 4 rhythm classes: sinus rhythm, AF, noise and other. Keeping our DNN architecture fixed and without any other hyper-parameter tuning, we trained our DNN on the publicly available training dataset (n=8,528), holding out a 10% development dataset for early stopping. DNN performance on the hidden test dataset (n=3,658) demonstrated overall F1 scores that were among those of the best performers from the competition (Supplementary Table 7)[24], with a class average F1 of 0.83

data: 3658
Keeping our DNN architecture fixed and without any other hyper-parameter tuning, we trained our DNN on the publicly available training dataset (n=8,528), holding out a 10% development dataset for early stopping. DNN performance on the hidden test dataset (n=3,658) demonstrated overall F1 scores that were among those of the best performers from the competition (Supplementary Table 7)[24], with a class average F1 of 0.83. This demonstrates the ability of our end-to-end DNN-based approach to generalize to a new set of rhythm labels on a different dataset
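The 10% development holdout used for early stopping can be sketched as a simple random split by record. This is an illustrative sketch, not the authors' pipeline; names are assumptions.

```python
import random

# Illustrative sketch of holding out a 10% development set for early
# stopping. Function and variable names are assumptions, not the
# authors' code.
def split_train_dev(records, dev_frac=0.10, seed=0):
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n_dev = int(len(shuffled) * dev_frac)
    return shuffled[n_dev:], shuffled[:n_dev]

train, dev = split_train_dev(range(8528))
print(len(train), len(dev))  # -> 7676 852
```

Note that for the main dataset the paper splits by patient rather than by record, so that no patient contributes to both sets.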

records: 16
Of the rhythm classes we examined, ventricular tachycardia is a clinically important rhythm for which the model had a lower F1 score than cardiologists, but interestingly had higher sensitivity (94.1%) than the averaged cardiologist (78.4%). Manual review of the 16 records misclassified by the DNN as ventricular tachycardia showed that “mistakes” made by the algorithm were very reasonable. For example, ventricular tachycardia and idioventricular rhythm differ only in the heart rate being above or below 100 beats per minute (bpm), respectively

committee-labeled idioventricular rhythm records: 3
For example, ventricular tachycardia and idioventricular rhythm differ only in the heart rate being above or below 100 beats per minute (bpm), respectively. In 7 of the committee-labeled idioventricular rhythm cases, the record contained periods of heart rate ≥100bpm, making ventricular tachycardia a reasonable classification by the DNN; the remaining 3 committee-labeled idioventricular rhythm records had rates close to 100bpm. Of the 5 cases where the committee-label was AF (4) or supraventricular tachycardia (1), all but one displayed aberrant conduction, resulting in wide QRS complexes with a similar appearance to ventricular tachycardia

cases: 5
In 7 of the committee-labeled idioventricular rhythm cases, the record contained periods of heart rate ≥100bpm, making ventricular tachycardia a reasonable classification by the DNN; the remaining 3 committee-labeled idioventricular rhythm records had rates close to 100bpm. Of the 5 cases where the committee-label was AF (4) or supraventricular tachycardia (1), all but one displayed aberrant conduction, resulting in wide QRS complexes with a similar appearance to ventricular tachycardia. If we re-categorize the 7 idioventricular rhythm records with rate ≥100bpm instead as ventricular tachycardia, overall DNN performance on ventricular tachycardia exceeds that of cardiologists by F1 score with a set-level F1 score of 0.82 (vs. 0.77)
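The rate-based distinction described above reduces to a single threshold. A minimal didactic rule (this illustrates the diagnostic criterion, not the DNN's learned behavior):

```python
# Ventricular tachycardia and idioventricular rhythm share morphology;
# by the criterion in the text, the label depends only on whether the
# ventricular rate is at or above 100 bpm. Didactic rule, not the DNN.
def ventricular_label(rate_bpm):
    if rate_bpm >= 100:
        return "ventricular tachycardia"
    return "idioventricular rhythm"

print(ventricular_label(120))  # -> ventricular tachycardia
print(ventricular_label(60))   # -> idioventricular rhythm
```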

idioventricular rhythm records with rate: 7
Of the 5 cases where the committee-label was AF (4) or supraventricular tachycardia (1), all but one displayed aberrant conduction, resulting in wide QRS complexes with a similar appearance to ventricular tachycardia. If we re-categorize the 7 idioventricular rhythm records with rate ≥100bpm instead as ventricular tachycardia, overall DNN performance on ventricular tachycardia exceeds that of cardiologists by F1 score with a set-level F1 score of 0.82 (vs. 0.77). This study has several important limitations

patients: 328
In addition, as revealed in our manual review of discordant predictions, in some cases there remains uncertainty in the correct label. Given the resource-intensive nature of cardiologist committee ECG annotation, our test dataset was limited to records from 328 patients; confidence intervals at our test dataset size were acceptably narrow, as we report in Table 1a, though our ability to perform subgroup analysis (such as by age/gender) is limited. Finally, we also note that in order to obtain a sufficient quantity of rare rhythms in our training and test datasets, we targeted patients exhibiting these rhythms during data extraction

board-certified practicing cardiac electrophysiologists: 8
We held out records from a random 10% of the training dataset patients for use as a development dataset to perform DNN hyper-parameter tuning. Eight board-certified practicing cardiac electrophysiologists and one board-certified practicing cardiologist (all referred to as cardiologists) annotated records in the test dataset. All iRhythm clinical annotations were removed from the test dataset

committees of three members: 3
All iRhythm clinical annotations were removed from the test dataset. Cardiologists were divided into three committees of three members each, and each committee annotated a separate one third of the test dataset (112 records). Cardiologist committees discussed records as a group and annotated by consensus, providing the gold-standard for model evaluation

cardiologists: 6
Cardiologist committees discussed records as a group and annotated by consensus, providing the gold-standard for model evaluation. Each of the remaining 6 cardiologists that were not part of the committee for that record also provided individual annotations for that record. These annotations were used to compare the model’s performance to that of the individual cardiologists

cardiologists: 3
These annotations were used to compare the model’s performance to that of the individual cardiologists. In summary, every record in the test dataset received one committee consensus annotation from a group of 3 cardiologists and 6 individual cardiologist annotations. Many ECG records contained multiple rhythm class diagnoses since the onset and offset of all unique classes were labeled within each 30 second record

samples: 200
The noise label was selected whenever artifact in the signal precluded accurate interpretation of the underlying rhythm. We developed a convolutional DNN to detect arrhythmias (Extended Data Figure 1) which takes as input the raw ECG data (sampled at 200Hz, or 200 samples per second) and outputs one prediction every 256 samples (or every 1.28 seconds), which we call the output interval. The network takes as input only the raw ECG samples and no other patient or ECG related features
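The input/output geometry described above can be made concrete with a couple of constants taken from the text (200 Hz sampling, one prediction per 256 samples):

```python
# Input/output geometry from the text: raw ECG sampled at 200 Hz, with
# one rhythm prediction per 256-sample output interval (256 / 200 = 1.28 s).
SAMPLE_RATE_HZ = 200
OUTPUT_INTERVAL_SAMPLES = 256

def output_interval_seconds():
    return OUTPUT_INTERVAL_SAMPLES / SAMPLE_RATE_HZ

def n_predictions(n_samples):
    """Predictions produced for a record whose length is a multiple of 256."""
    return n_samples // OUTPUT_INTERVAL_SAMPLES

print(output_interval_seconds())  # -> 1.28
print(n_predictions(2560))        # -> 10
```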

samples: 256
To train and evaluate our model on the Physionet challenge data which contains variable length recordings, we made minor modifications to the DNN. Without any change, the DNN can accept as input any record with a length which is a multiple of 256 samples. In order to handle examples which are not a multiple of 256, records were truncated to the nearest multiple
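The truncation step described above is a one-liner; this sketch uses illustrative names, not the authors' code:

```python
# Drop trailing samples so the record length becomes a multiple of 256,
# matching the handling of variable-length Physionet records described
# in the text. Illustrative sketch.
def truncate_to_multiple(record, multiple=256):
    usable = (len(record) // multiple) * multiple
    return record[:usable]

print(len(truncate_to_multiple([0.0] * 1000)))  # -> 768
```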

cardiologists: 6
For an aggregate measure of model performance, we computed the class-frequency weighted arithmetic mean for both the F1 score and the AUC. In order to obtain estimates of how the DNN compares to an average cardiologist, cardiologist performance characteristics were averaged across the 6 cardiologists who individually annotated each record. We used confusion matrices to illustrate the specific examples of rhythm classes where the DNN prediction or the individual cardiologist predictions were discordant with the committee consensus at the sequence-level
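The aggregate metric described above, a class-frequency weighted arithmetic mean of per-class scores (F1 or AUC), can be sketched as follows; the class names and values are illustrative, not from the paper:

```python
# Class-frequency weighted arithmetic mean of per-class scores,
# weighting each rhythm class by its label frequency in the test set.
# Illustrative sketch with made-up values.
def weighted_mean(scores, class_counts):
    total = sum(class_counts.values())
    return sum(scores[c] * class_counts[c] / total for c in scores)

result = weighted_mean({"AF": 0.8, "sinus": 0.9}, {"AF": 1, "sinus": 3})
print(round(result, 3))  # -> 0.875
```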

References
  • 1. Schläpfer J & Wellens HJ. Computer-Interpreted Electrocardiograms. J. Am. Coll. Cardiol. 70, 1183–1192 (2017). [PubMed: 28838369]
  • 2. LeCun Y, Bengio Y & Hinton G. Deep learning. Nature 521, 436–444 (2015). [PubMed: 26017442]
  • 3. Holst H, Ohlsson M, Peterson C & Edenbrandt L. A confident decision support system for interpreting electrocardiograms. Clin. Physiol. 19, 410–418 (1999). [PubMed: 10516892]
  • 4. Schlant RC et al. Guidelines for electrocardiography. A report of the American College of Cardiology/American Heart Association Task Force on Assessment of Diagnostic and Therapeutic Cardiovascular Procedures (Committee on Electrocardiography). J. Am. Coll. Cardiol. 19, 473–481 (1992). [PubMed: 1537997]
  • 5. Shah AP & Rubin SA. Errors in the computerized electrocardiogram interpretation of cardiac rhythm. J. Electrocardiol. 40, 385–390 (2007). [PubMed: 17531257]
  • 6. Guglin ME & Thatai D. Common errors in computer electrocardiogram interpretation. Int. J. Cardiol. 106, 232–237 (2006). [PubMed: 16321696]
  • 7. Poon K, Okin PM & Kligfield P. Diagnostic performance of a computer-based ECG rhythm algorithm. J. Electrocardiol. 38, 235–238 (2005). [PubMed: 16003708]
  • 8. Amodei D et al. Deep Speech 2: End-to-End Speech Recognition in English and Mandarin. In Proceedings of the 33rd International Conference on Machine Learning 48 (2016).
  • 9. He K, Zhang X, Ren S & Sun J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision 1026–1034 (2015).
  • 10. Silver D et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489 (2016). [PubMed: 26819042]
  • 11. Gulshan V et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 316, 2402–2410 (2016).
  • 12. Esteva A et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115–118 (2017). doi:10.1038/nature21056
  • 13. Poungponsri S & Yu X. An adaptive filtering approach for electrocardiogram (ECG) signal noise reduction using neural networks. Neurocomputing 117, 206–213 (2013).
  • 14. Ochoa A, Mena LJ & Felix VG. Noise-Tolerant Neural Network Approach for Electrocardiogram Signal Classification. In International Conference on Compute and Data Analysis 277–282 (2017). doi:10.1145/3093241.3093269
  • 15. Mateo J & Rieta JJ. Application of artificial neural networks for versatile preprocessing of electrocardiogram recordings. J. Med. Eng. Technol. 36, 90–101 (2012). [PubMed: 22268996]
  • 16. Pourbabaee B, Roshtkhari MJ & Khorasani K. Deep Convolutional Neural Networks and Learning ECG Features for Screening Paroxysmal Atrial Fibrillation Patients. IEEE Trans. Syst. Man Cybern. Syst. 99, 1–10 (2017).
  • 17. Javadi M, Arani SAAA, Sajedin A & Ebrahimpour R. Classification of ECG arrhythmia by a modular neural network based on Mixture of Experts and Negatively Correlated Learning. Biomed. Signal Process. Control 8, 289–296 (2013).
  • 18. Acharya UR et al. A deep convolutional neural network model to classify heartbeats. Comput. Biol. Med. 89, 389–396 (2017). [PubMed: 28869899]
  • 19. Banupriya CV & Karpagavalli S. Electrocardiogram Beat Classification using Probabilistic Neural Network. International Journal of Computer Applications 31–37 (2014).
  • 20. Al Rahhal MM et al. Deep learning approach for active classification of electrocardiogram signals. Inf. Sci. 345, 340–354 (2016).
  • 21. Acharya UR et al. Automated detection of arrhythmias using different intervals of tachycardia ECG segments with convolutional neural network. Inf. Sci. 405, 81–90 (2017).
  • 22. Zihlmann M, Perekrestenko D & Tschannen M. Convolutional Recurrent Neural Networks for Electrocardiogram Classification. Comput. Cardiol. 44 (2017).
  • 23. Xiong Z, Stiles M & Zhao J. Robust ECG Signal Classification for the Detection of Atrial Fibrillation Using Novel Neural Networks. Comput. Cardiol. 44 (2017).
  • 24. Clifford G et al. AF Classification from a Short Single Lead ECG Recording: the PhysioNet Computing in Cardiology Challenge 2017. Comput. Cardiol. 44, 1–4 (2017).
  • 25. Teijeiro T, García CA, Castro D & Félix P. Arrhythmia Classification from the Abductive Interpretation of Short Single-Lead ECG Records. Comput. Cardiol. 44 (2017).
  • 26. Goldberger AL et al. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101, e215–e220 (2000). [PubMed: 10851218]
  • 27. Turakhia MP et al. Diagnostic utility of a novel leadless arrhythmia monitoring device. Am. J. Cardiol. 112, 520–524 (2013). [PubMed: 23672988]
  • 28. Hand DJ & Till RJ. A Simple Generalisation of the Area Under the ROC Curve for Multiple Class Classification Problems. Mach. Learn. 45, 171–186 (2001).
  • 29. Smith M, Saunders R, Stuckhardt L & McGinnis M. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. The Institute of Medicine, The National Academies Press (2012). doi:10.17226/13444
  • 30. Lyon A, Mincholé A, Martínez JP, Laguna P & Rodriguez B. Computational techniques for ECG analysis and interpretation in light of their contribution to medical advances. J. R. Soc. Interface 15 (2018).
  • 31. Carrara M et al. Heart rate dynamics distinguish among atrial fibrillation, normal sinus rhythm and sinus rhythm with frequent ectopy. Physiol. Meas. 36, 1873–1888 (2015). [PubMed: 26246162]
  • 32. Zhou X, Ding H, Ung B, Pickwell-MacPherson E & Zhang Y. Automatic online detection of atrial fibrillation based on symbolic dynamics and Shannon entropy. Biomed. Eng. Online 13 (2014).
  • 33. Hong S et al. ENCASE: an ENsemble ClASsifiEr for ECG Classification Using Expert Features and Deep Neural Networks. Computing in Cardiology 44, 2–5 (2017).
  • 34. Nahar J, Imam T, Tickle KS & Chen YP. Computational intelligence for heart disease diagnosis: A medical knowledge driven approach. Expert Syst. Appl. 40, 96–104 (2013).
  • 35. Cubanski D, Cyganski D, Antman EM & Feldman CL. A Neural Network System for Detection of Atrial Fibrillation in Ambulatory Electrocardiograms. J. Cardiovasc. Electrophysiol. 5, 602–608 (1994). [PubMed: 7987530]
  • 36. Andreotti F, Carr O, Pimentel MAF, Mahdi A & De Vos M. Comparing Feature-Based Classifiers and Convolutional Neural Networks to Detect Arrhythmia from Short Segments of ECG. Computing in Cardiology 44 (2017).
  • 37. Xu SS, Mak M & Cheung C. Towards End-to-End ECG Classification with Raw Signal Extraction and Deep Neural Networks. IEEE J. Biomed. Health Inform. 14, 1 (2018).
  • 38. Oh S, Ng E, Tan R & Acharya UR. Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats. Comput. Biol. Med., in press (2018).
  • 39. Shashikumar SP, Shah AJ, Clifford GD & Nemati S. Detection of Paroxysmal Atrial Fibrillation using Attention-based Bidirectional Recurrent Neural Networks. In KDD '18: The 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 715–723 (2018). doi:10.1145/3219819.3219912
  • 40. Xia Y, Wulan N, Wang K & Zhang H. Detecting atrial fibrillation by deep convolutional neural networks. Comput. Biol. Med. 93, 84–92 (2018). [PubMed: 29291535]
Methods-only references
  • 41. He K, Zhang X, Ren S & Sun J. Identity mappings in deep residual networks. In European Conference on Computer Vision 630–645 (2016).
  • 42. Ioffe S & Szegedy C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. Int. Conf. Mach. Learn. 37 (2015).
  • 43. He K, Zhang X, Ren S & Sun J. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition 770–778 (2016). doi:10.1109/CVPR.2016.90
  • 44. Srivastava N, Hinton G, Krizhevsky A, Sutskever I & Salakhutdinov R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 15, 1929–1958 (2014).
  • 45. Kingma DP & Ba JL. Adam: A method for stochastic optimization. In International Conference on Learning Representations 1–15 (2015).
  • 46. Hochreiter S & Schmidhuber J. Long Short-Term Memory. Neural Comput. 9, 1735–1780 (1997). [PubMed: 9377276]
  • 47. Fawcett T. An introduction to ROC analysis. Pattern Recognit. Lett. 27, 861–874 (2006).
  • 48. Hanley JA & McNeil BJ. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology 148, 839–843 (1983). [PubMed: 6878708]
  • 49. Saito T & Rehmsmeier M. The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets. PLoS One 10, 1–21 (2015).