Versatile Medical Image Segmentation Learned from Multi-Source Datasets via Model Self-Disambiguation
CVPR 2024 (2023)
Abstract
A versatile medical image segmentation model applicable to imaging data
collected with diverse equipment and protocols can facilitate model deployment
and maintenance. However, building such a model typically requires a large,
diverse, and fully annotated dataset, which is rarely available due to the
labor-intensive and costly data curation. In this study, we develop a
cost-efficient method by harnessing readily available data with partially or
even sparsely annotated segmentation labels. We devise strategies for model
self-disambiguation, prior knowledge incorporation, and imbalance mitigation to
address challenges associated with inconsistently labeled data from various
sources, including label ambiguity and imbalances across modalities, datasets,
and segmentation labels. Experimental results on a multi-modal dataset compiled
from eight different sources for abdominal organ segmentation have demonstrated
our method's effectiveness and superior performance over alternative
state-of-the-art methods, highlighting its potential for optimizing the use of
existing annotated data and reducing the annotation efforts for new data to
further enhance model capability.
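The abstract does not detail the self-disambiguation strategy, but a common way to train a single segmentation model on inconsistently labeled sources is to marginalize classes that a given dataset never annotates into its background, so the model is not penalized for predicting structures that source simply left unlabeled. A minimal NumPy sketch of this idea (function and variable names are hypothetical, not the authors' implementation):

```python
import numpy as np

def partial_label_ce(logits, labels, annotated):
    """Cross-entropy for a partially annotated source: probability mass of
    classes NOT annotated in this dataset is folded into background (class 0),
    so unlabeled foreground is not treated as a prediction error.

    logits:    (C, H, W) raw class scores
    labels:    (H, W) integer label map from this source
    annotated: set of foreground class ids this source labels
    """
    # softmax over the class axis
    probs = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs /= probs.sum(axis=0, keepdims=True)

    merged = probs.copy()
    for c in range(1, logits.shape[0]):
        if c not in annotated:
            merged[0] += merged[c]   # fold unannotated class into background
            merged[c] = 0.0

    # gather the probability assigned to the (partial) label at each pixel
    h, w = labels.shape
    p = merged[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return float(-np.log(p + 1e-8).mean())
```

Because the folded background probability is at least as large as the original one, this loss never penalizes the model for predicting an organ that the source dataset did not annotate, which is the core difficulty the paper's self-disambiguation strategies target.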