Optimizing risk-based breast cancer screening policies with reinforcement learning

Semantic Scholar (2021)

Citations: 21 | Views: 13
Abstract
Screening programs must balance the benefits of early detection against the costs of over-screening. Achieving this balance relies on two complementary technologies: (1) the ability to assess patient risk, and (2) the ability to design personalized screening programs given that risk. While methodologies for assessing patient risk have improved significantly with advances in deep learning applied to imaging and genetics, our ability to personalize screening policies still lags behind. Here, we introduce Tempo, a novel reinforcement learning-based framework for personalized screening, and demonstrate its efficacy in the context of breast cancer. We trained our risk-based screening policies on a large screening mammography dataset from Massachusetts General Hospital (MGH), USA, and validated them on held-out patients from MGH and on external datasets from Emory (USA), Karolinska (Sweden), and Chang Gung Memorial Hospital (CGMH, Taiwan). Across all test sets, we found that a Tempo policy combined with an image-based AI risk model was significantly more efficient than current clinical screening regimes in terms of simulated early detection per screen performed. Moreover, we showed that the same Tempo policy can be easily adapted to a wide range of screening preferences, allowing clinicians to select their desired trade-off between early detection and screening cost without training a new policy. Finally, we demonstrated that Tempo policies based on AI risk models outperformed Tempo policies based on less accurate clinical risk models. Altogether, our results show that pairing AI-based risk models with agile, AI-designed screening policies has the potential to improve screening programs, advancing early detection while reducing over-screening.
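To make the core idea concrete, the following minimal sketch illustrates the kind of preference-conditioned screening policy the abstract describes: a single policy that takes both a patient's risk score and a trade-off weight, and returns a follow-up interval, so one trained policy can serve a range of early-detection vs. screening-cost preferences. The interval grid, reward shape, and function names are illustrative assumptions, not the paper's actual Tempo implementation.

```python
import numpy as np

# Illustrative sketch only -- not the paper's Tempo code. The reward and the
# policy are toy stand-ins for a learned RL policy conditioned on preference.

INTERVALS_MONTHS = np.array([6, 12, 24, 36])  # hypothetical follow-up gaps


def simulated_reward(early_detection: float, screens: float, lam: float) -> float:
    """Reward = simulated early-detection benefit minus lam * screening cost."""
    return early_detection - lam * screens


def toy_policy(risk_score: float, lam: float) -> int:
    """Hypothetical preference-conditioned policy: shorter recall intervals
    for higher-risk patients, longer ones as the screening penalty lam grows."""
    urgency = risk_score / (1.0 + lam)  # higher urgency -> screen sooner
    idx = int(np.clip((1.0 - urgency) * len(INTERVALS_MONTHS),
                      0, len(INTERVALS_MONTHS) - 1))
    return int(INTERVALS_MONTHS[idx])


if __name__ == "__main__":
    for risk in (0.05, 0.3, 0.8):
        for lam in (0.0, 0.5):
            print(f"risk={risk:.2f} lam={lam:.1f} -> "
                  f"{toy_policy(risk, lam)} months")
```

In the paper's framing, the analogous policy is learned with reinforcement learning against a reward that trades simulated early detection against the number of screens, and the preference weight lets clinicians move along that trade-off without retraining.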