MultiSeg: Semantically Meaningful, Scale-Diverse Segmentations From Minimal User Input

2019 IEEE/CVF International Conference on Computer Vision (ICCV)

Cited by 36 | Views 91
Abstract
Existing deep learning-based interactive image segmentation approaches typically assume that the target of interest is a single object and fail to account for the diversity of user expectations, so they require excessive user input when the goal is instead an object part or a group of objects. Motivated by the observation that an object part, a full object, and a collection of objects differ essentially in size, we propose a new concept called scale-diversity, which characterizes the spectrum of segmentations with respect to different scales. Building on this concept, we present MultiSeg, a scale-diverse interactive image segmentation network that incorporates a set of two-dimensional scale priors into the model to generate a set of scale-varying proposals that conform to the user input. We explicitly encourage segmentation diversity during training by synthesizing diverse training samples for each image. As a result, our method allows the user to quickly locate the segmentation closest to the intended target and refine it further if necessary. Despite its simplicity, experimental results demonstrate that the proposed model quickly produces diverse yet plausible segmentation outputs, reducing the user interaction required, especially when many types of segmentations (object parts or groups) are expected.
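The abstract describes the core mechanism only at a high level: the network is conditioned on a set of two-dimensional scale priors so that, for the same image and the same user clicks, each prior yields a different scale-consistent proposal. Below is a minimal PyTorch sketch of that conditioning idea, not the authors' implementation; the module name, channel layout, and example prior values are illustrative assumptions.

# Minimal sketch (assumed, not the paper's architecture): a tiny encoder-decoder
# that takes RGB + positive/negative click maps + a tiled 2-D scale prior, and is
# run once per prior to produce a set of scale-diverse mask proposals.
import torch
import torch.nn as nn

class ScaleConditionedSeg(nn.Module):
    def __init__(self, in_ch=7, feat=32):
        # in_ch = 3 RGB + 2 click maps + 2 broadcast scale-prior channels
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, 1, 4, stride=2, padding=1),  # 1-channel mask logits
        )

    def forward(self, rgb, clicks, scale_prior):
        # scale_prior: (B, 2) relative target height/width; tiled into two spatial channels
        b, _, h, w = rgb.shape
        prior_maps = scale_prior.view(b, 2, 1, 1).expand(b, 2, h, w)
        x = torch.cat([rgb, clicks, prior_maps], dim=1)
        return self.dec(self.enc(x))

# One forward pass per scale prior gives a set of proposals for the same clicks,
# e.g. roughly "part", "object", and "group" scales (values here are made up).
model = ScaleConditionedSeg()
rgb = torch.rand(1, 3, 256, 256)
clicks = torch.zeros(1, 2, 256, 256)           # positive / negative click maps
clicks[0, 0, 128, 128] = 1.0                    # a single positive click
scale_priors = torch.tensor([[0.1, 0.1], [0.4, 0.4], [0.9, 0.9]])
proposals = [torch.sigmoid(model(rgb, clicks, p.unsqueeze(0))) for p in scale_priors]
print([p.shape for p in proposals])             # 3 proposals, each (1, 1, 256, 256)

In this sketch the user would simply pick whichever proposal best matches the intended target (part, object, or group) and continue refining from there, which mirrors the interaction pattern the abstract describes.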
Keywords
minimal user input, deep learning-based interactive image segmentation approaches, target-of-interest, potential diversity, user expectations, excessive user input, object part, scale-diversity, MultiSeg, scale-diverse interactive image segmentation network, scale-varying proposals, segmentation diversity, segmentation target, user interaction, semantically meaningful, scale-diverse segmentations