Robust Multi-modal 3D Patient Body Modeling.

Medical Image Computing and Computer-Assisted Intervention (2020)

Abstract
This paper considers the problem of 3D patient body modeling. Such a 3D model provides valuable information for improving patient care, streamlining clinical workflow, and automating parameter optimization for medical devices. With the popularity of 3D optical sensors and the rise of deep learning, this problem has seen much recent development. However, existing approaches are mostly constrained by requiring specific types of sensors as well as limited data and labels, making them difficult to use ubiquitously across various clinical applications. To address these issues, we present a novel robust dynamic fusion technique that facilitates flexible multi-modal inference, resulting in accurate 3D body modeling even when the input sensor modality is only a subset of the training modalities. This leads to a more scalable and generic framework that does not require repeated application-specific data collection and model retraining, thereby achieving an important flexibility towards developing cost-effective, clinically deployable machine learning models. We evaluate our method on several patient positioning datasets and demonstrate its efficacy compared to competing methods, even showing robustness in challenging patient-under-the-cover clinical scenarios.
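The abstract's central idea is inference from an arbitrary subset of the training sensor modalities. The paper's actual fusion architecture is not described here, but the general pattern can be sketched as follows: encode each available modality independently, then fuse only the embeddings that are present so the downstream model never depends on any single sensor. The encoder functions and dimensions below are hypothetical stand-ins, not the authors' method; a minimal sketch, assuming simple averaging as the fusion rule:

```python
import numpy as np

# Hypothetical per-modality encoders (stand-ins for learned networks).
def encode_rgb(x):
    return np.tanh(x @ np.full((4, 8), 0.1))

def encode_depth(x):
    return np.tanh(x @ np.full((4, 8), 0.2))

ENCODERS = {"rgb": encode_rgb, "depth": encode_depth}

def fuse(inputs):
    """Average the embeddings of whichever modalities are present,
    so inference works on any subset of the training modalities."""
    feats = [ENCODERS[m](x) for m, x in inputs.items()]
    return np.mean(feats, axis=0)

# Inference with both modalities, or with only one available:
both = fuse({"rgb": np.ones(4), "depth": np.ones(4)})
rgb_only = fuse({"rgb": np.ones(4)})
print(both.shape, rgb_only.shape)  # (8,) (8,)
```

In practice the fusion step would be learned and robust to missing inputs (e.g. trained with modality dropout); averaging here only illustrates why the fused embedding keeps a fixed shape regardless of which sensors are connected.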
Keywords
modeling, 3D, body, multi-modal