Matching-space Stereo Networks for Cross-domain Generalization

2020 International Conference on 3D Vision (3DV)

Abstract
End-to-end deep networks represent the state of the art for stereo matching. While excelling on images framing environments similar to the training set, major drops in accuracy occur in unseen domains (e.g., when moving from synthetic to real scenes). In this paper we introduce a novel family of architectures, namely Matching-Space Networks (MS-Nets), with improved generalization properties. By replacing learning-based feature extraction from image RGB values with matching functions and confidence measures from conventional wisdom, we move the learning process from the color space to the Matching Space, avoiding over-specialization to domain-specific features. Extensive experimental results on four real datasets highlight that our proposal leads to superior generalization to unseen environments over conventional deep architectures, keeping accuracy on the source domain almost unaltered. Our code is available at https://github.com/ccj5351/MS-Nets.
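As a rough illustration of the idea in the abstract (not the paper's actual pipeline), the sketch below builds a matching-space input from a conventional matching function rather than from learned RGB features: a census transform followed by Hamming-distance costs over a range of disparities. The choice of census as the matching function, the window size, and all function names are assumptions for this sketch; in MS-Nets such conventional costs and confidence measures would replace the learned feature-extraction front end of a deep stereo network.

```python
import numpy as np

def census_transform(img, window=5):
    """Census transform: encode each pixel by comparing it against its window neighbors."""
    h, w = img.shape
    r = window // 2
    padded = np.pad(img, r, mode="edge")
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)  # (H, W, window*window - 1)

def matching_space_volume(left, right, max_disp=64, window=5):
    """Cost volume from a conventional matching function (census + Hamming distance).

    Returns shape (max_disp, H, W). Additional matching functions and confidence
    measures would be stacked along an extra channel dimension to form the full
    matching-space input of the network (assumption for this sketch).
    """
    cl = census_transform(left, window)
    cr = census_transform(right, window)
    h, w, _ = cl.shape
    volume = np.zeros((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        # Left pixel x is compared with right pixel x - d (edge-replicated at the border).
        shifted = np.pad(cr, ((0, 0), (d, 0), (0, 0)), mode="edge")[:, :w]
        volume[d] = np.count_nonzero(cl != shifted, axis=-1)
    return volume

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((96, 128)).astype(np.float32)
    right = np.roll(left, -3, axis=1)          # toy pair with a constant 3-px disparity
    vol = matching_space_volume(left, right, max_disp=16)
    print(vol.shape)                           # (16, 96, 128)
    print(vol.argmin(axis=0).mean())           # winner-take-all disparity, close to 3
```

In an end-to-end network, a volume like this (together with confidence measures) would be the input processed by the subsequent regularization and disparity-regression stages, so the learning happens in the matching space rather than in the color space.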
Keywords
cross-domain generalization,end-to-end deep networks,stereo matching,improved generalization properties,learning-based feature extraction,image RGB values,matching functions,color space,domain specific features,conventional deep architectures,source domain,matching-space stereo networks