Method for Generating Synthetic Data Combining Chest Radiography Images with Tabular Clinical Information Using Dual Generative Models

Tomohiro Kikuchi, Shouhei Hanaoka, Takahiro Nakao, Tomomi Takenaga, Yukihiro Nomura, Hiromu Mori, Tsuneo Yoshikawa

arXiv (Cornell University), 2023

Abstract
The generation of synthetic medical records using Generative Adversarial Networks (GANs) is becoming crucial for addressing privacy concerns and facilitating data sharing in the medical domain. In this paper, we introduce a novel method to create synthetic hybrid medical records that combine both image and non-image data, utilizing an auto-encoding GAN (αGAN) and a conditional tabular GAN (CTGAN). Our methodology encompasses three primary steps: I) Dimensional reduction of images in a private dataset (pDS) using the pretrained encoder of the αGAN, followed by integration with the remaining non-image clinical data to form tabular representations; II) Training the CTGAN on the encoded pDS to produce a synthetic dataset (sDS) which amalgamates encoded image features with non-image clinical data; and III) Reconstructing synthetic images from the image features using the αGAN's pretrained decoder. We successfully generated synthetic records incorporating both Chest X-Rays (CXRs) and thirteen non-image clinical variables (seven categorical and six numeric). To evaluate the efficacy of the sDS, we designed classification and regression tasks and compared the performance of models trained on pDS and sDS against the pDS test set. Remarkably, by leveraging five times the volume of sDS for training, we achieved classification and regression results that were comparable, if slightly inferior, to those obtained using the native pDS. Our method holds promise for publicly releasing synthetic datasets without undermining the potential for secondary data usage.
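To make the three-step pipeline concrete, the following is a minimal Python sketch under stated assumptions: `encode_image` and `decode_latent` are hypothetical placeholders standing in for the αGAN's pretrained encoder and decoder, and the tabular model uses the open-source `ctgan` package, which may differ from the authors' actual implementation.

```python
# Sketch of the abstract's three-step pipeline (not the authors' code).
# encode_image / decode_latent are assumed αGAN encoder/decoder callables.
import numpy as np
import pandas as pd
from ctgan import CTGAN


def build_encoded_table(images, clinical_df, encode_image):
    """Step I: reduce each image to a latent vector and join it with the
    non-image clinical variables to form a single tabular representation."""
    latents = np.stack([encode_image(img) for img in images])  # shape (N, d)
    latent_df = pd.DataFrame(
        latents, columns=[f"z{i}" for i in range(latents.shape[1])]
    )
    return pd.concat([latent_df, clinical_df.reset_index(drop=True)], axis=1)


def synthesize(encoded_df, categorical_cols, n_samples, decode_latent):
    """Steps II-III: train CTGAN on the encoded table, sample a synthetic
    table, then decode the latent columns back into synthetic images."""
    ctgan = CTGAN(epochs=300)
    ctgan.fit(encoded_df, discrete_columns=categorical_cols)  # Step II
    synthetic = ctgan.sample(n_samples)

    z_cols = [c for c in synthetic.columns if c.startswith("z")]
    synthetic_images = [
        decode_latent(row.to_numpy()) for _, row in synthetic[z_cols].iterrows()
    ]  # Step III
    return synthetic_images, synthetic.drop(columns=z_cols)
```

In this sketch the synthetic table carries both the latent image columns and the clinical columns, so correlations between image content and clinical variables can, in principle, be preserved by the CTGAN before the images are reconstructed.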
Keywords
dual generative models, tabular clinical information, radiography