Multimodal Neural Networks: RGB-D for Semantic Segmentation and Object Detection

Image Analysis, SCIA 2017, Part I (2017)

Abstract
This paper presents a novel multi-modal CNN architecture that exploits complementary input cues in addition to color information alone. The joint model implements a mid-level fusion that allows the network to exploit cross-modal interdependencies already at an intermediate feature level. The benefit of the presented architecture is demonstrated on the RGB-D image understanding task. So far, state-of-the-art RGB-D CNNs have used network weights trained on color data. In contrast, a superior initialization scheme is proposed that pre-trains the depth branch of the multi-modal CNN independently. In an end-to-end training, the network parameters are then optimized jointly on the challenging Cityscapes dataset. Thorough experiments show the effectiveness of the proposed model: both the RGB GoogLeNet and further RGB-D baselines are outperformed by a significant margin on two different tasks, semantic segmentation and object detection. For the latter, this paper shows how to extract object-level ground truth from the instance-level annotations in Cityscapes in order to train a powerful object detector.
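The mid-level fusion idea described above can be sketched in a minimal, framework-free way: two modality-specific branches produce feature maps, which are concatenated along the channel axis before a shared later stage. The sketch below uses NumPy with random 1×1-style projections as stand-ins for convolutional stages; all function names, channel widths, and shapes are illustrative assumptions, not the paper's actual GoogLeNet-based implementation.

```python
import numpy as np

def conv_block(x, out_channels, rng):
    # Hypothetical stand-in for a convolutional stage: a random linear
    # projection over the channel axis (illustrates shapes only, no learning).
    w = rng.standard_normal((x.shape[0], out_channels))
    # (C, H, W) -> (H, W, C) @ (C, out) -> (H, W, out) -> (out, H, W), with ReLU.
    return np.maximum(x.transpose(1, 2, 0) @ w, 0).transpose(2, 0, 1)

def mid_fusion_forward(rgb, depth, rng):
    # Separate early branches for each modality (in the paper, the depth
    # branch would be pre-trained independently before joint training).
    f_rgb = conv_block(rgb, 64, rng)
    f_depth = conv_block(depth, 64, rng)
    # Mid-level fusion: concatenate feature maps along the channel axis,
    # so subsequent layers can exploit cross-modal interdependencies.
    fused = np.concatenate([f_rgb, f_depth], axis=0)
    # Shared later stage operating on the fused representation.
    return conv_block(fused, 128, rng)

rng = np.random.default_rng(0)
rgb = rng.standard_normal((3, 32, 32))    # C x H x W color input
depth = rng.standard_normal((1, 32, 32))  # single-channel depth input
out = mid_fusion_forward(rgb, depth, rng)
print(out.shape)  # (128, 32, 32)
```

The key design point is the fusion depth: concatenating after a few modality-specific layers (rather than at the raw input or only at the final logits) lets each branch first learn modality-appropriate low-level features while still giving the shared layers access to both modalities.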