Cross-Modality Translation with Generative Adversarial Networks to Unveil Alzheimer's Disease Biomarkers
arXiv (2024)
Abstract
Generative approaches for cross-modality transformation have recently gained
significant attention in neuroimaging. While most previous work has focused on
case-control data, the application of generative models to disorder-specific
datasets and their ability to preserve diagnostic patterns remain relatively
unexplored. Hence, in this study, we investigated the use of a generative
adversarial network (GAN) in the context of Alzheimer's disease (AD) to
generate functional network connectivity (FNC) and T1-weighted structural
magnetic resonance imaging data from each other. We employed a cycle-GAN to
synthesize data in an unpaired data transition and enhanced the transition by
integrating weak supervision in cases where paired data were available. Our
findings show that the model performs well, achieving a
structural similarity index measure (SSIM) of 0.89 ± 0.003 for T1 images and a
correlation of 0.71 ± 0.004 for FNCs. Moreover, our qualitative analysis
revealed similar patterns between generated and actual data when comparing AD
to cognitively normal (CN) individuals. In particular, we observed
significantly increased functional connectivity in cerebellar-sensory motor and
cerebellar-visual networks and reduced connectivity in cerebellar-subcortical,
auditory-sensory motor, sensory motor-visual, and cerebellar-cognitive control
networks. Additionally, the T1 images generated by our model showed a similar
pattern of atrophy in the hippocampal and other temporal regions of Alzheimer's
patients.
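The abstract describes a cycle-GAN trained on unpaired FNC/T1 data, with a weak-supervision term added when paired samples are available. As an illustrative sketch only (not the authors' implementation), the loss structure can be written with toy linear "generators" standing in for the two networks; all names here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the two generators of a cycle-GAN:
# G maps FNC features -> T1 features, F maps T1 features -> FNC features.
W_G = rng.normal(size=(8, 8))
W_F = np.linalg.inv(W_G)  # a perfect inverse, so cycle loss is near zero

def G(x):
    """Generator FNC -> T1 (toy linear map)."""
    return x @ W_G

def F(y):
    """Generator T1 -> FNC (toy linear map)."""
    return y @ W_F

def cycle_loss(x, y):
    """L1 cycle-consistency: F(G(x)) should recover x, and G(F(y)) should recover y."""
    return np.abs(F(G(x)) - x).mean() + np.abs(G(F(y)) - y).mean()

def weak_supervision_loss(x, y_paired):
    """Direct L1 translation loss, applied only for samples where a paired
    (FNC, T1) example exists — the weak supervision mentioned in the abstract."""
    return np.abs(G(x) - y_paired).mean()

x = rng.normal(size=(4, 8))  # toy batch of FNC features
y = rng.normal(size=(4, 8))  # toy batch of T1 features
total = cycle_loss(x, y) + weak_supervision_loss(x, G(x))
```

In the full method the adversarial losses of the two discriminators would be added to this objective; the sketch shows only the cycle-consistency and paired-supervision terms.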