A Multi-Center Study on the Adaptability of a Shared Foundation Model for Electronic Health Records
arXiv (2023)
Abstract
Foundation models hold promise for transforming AI in healthcare by providing
modular components that are easily adaptable to downstream healthcare tasks,
making AI development more scalable and cost-effective. Structured EHR
foundation models, trained on coded medical records from millions of patients,
demonstrated benefits including increased performance with fewer training
labels, and improved robustness to distribution shifts. However, questions
remain on the feasibility of sharing these models across different hospitals
and their performance for local task adaptation. This multi-center study
examined the adaptability of a recently released structured EHR foundation
model (FM_SM), trained on longitudinal medical record data from 2.57M
Stanford Medicine patients. Experiments were conducted using EHR data at The
Hospital for Sick Children and MIMIC-IV. We assessed both adaptability via
continued pretraining on local data, and task adaptability compared to
baselines of training models from scratch at each site, including a local
foundation model. We evaluated the performance of these models on 8 clinical
prediction tasks. In both datasets, adapting the off-the-shelf FM_SM
matched the performance of GBM models locally trained on all data while
providing a 13% improvement in settings with few task-specific training labels.
With continued pretraining on local data, label efficiency substantially
improved, such that FM_SM required fewer than 1% of training examples to
match the fully trained GBM's performance. Continued pretraining was also 60 to
90% more sample-efficient than training local foundation models from scratch.
Our findings show that adapting shared EHR foundation models across hospitals
provides improved prediction performance at less cost, underscoring the utility
of base foundation models as modular components to streamline the development
of healthcare AI.
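The core comparison in the abstract, a lightweight head adapted on frozen foundation-model representations with few labels versus a GBM trained from scratch on all local labels, can be sketched as below. This is a hypothetical illustration on synthetic data, not the study's code: the "foundation-model features" are simulated by a fixed random projection, whereas in the paper they come from FM_SM's learned patient representations, and the evaluation uses AUROC as a stand-in for the paper's clinical prediction metrics.

```python
# Hypothetical sketch of the evaluation setup (assumptions: synthetic data,
# a random projection standing in for frozen FM_SM representations).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 4000, 50
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = (X @ w + rng.normal(scale=2.0, size=n) > 0).astype(int)

X_train, X_test = X[:3000], X[3000:]
y_train, y_test = y[:3000], y[3000:]

# Stand-in for frozen foundation-model patient representations.
proj = rng.normal(size=(d, 32))
Z_train, Z_test = np.tanh(X_train @ proj), np.tanh(X_test @ proj)

# Baseline: GBM trained from scratch on all local labels.
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
auc_gbm = roc_auc_score(y_test, gbm.predict_proba(X_test)[:, 1])

# Adaptation: lightweight head on frozen FM features, using far fewer labels.
few = 300
head = LogisticRegression(max_iter=1000).fit(Z_train[:few], y_train[:few])
auc_head = roc_auc_score(y_test, head.predict_proba(Z_test)[:, 1])

print(f"GBM (all {len(y_train)} labels): AUROC={auc_gbm:.3f}")
print(f"FM head ({few} labels):   AUROC={auc_head:.3f}")
```

The design choice mirrored here is label efficiency: the adapted head sees only a fraction of the training labels, so comparing the two AUROCs at varying `few` traces out the kind of label-efficiency curve the study reports.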