xMUDA: Cross-Modal Unsupervised Domain Adaptation for 3D Semantic Segmentation

CVPR 2020

Abstract
Unsupervised Domain Adaptation (UDA) is crucial to tackle the lack of annotations in a new domain. There are many multi-modal datasets, but most UDA approaches are uni-modal. In this work, we explore how to learn from multi-modality and propose cross-modal UDA (xMUDA) where we assume the presence of 2D images and 3D point clouds for 3D semantic segmentation. This is challenging as the two input spaces are heterogeneous and can be impacted differently by domain shift. In xMUDA, modalities learn from each other through mutual mimicking, disentangled from the segmentation objective, to prevent the stronger modality from adopting false predictions from the weaker one. We evaluate on new UDA scenarios including day-to-night, country-to-country and dataset-to-dataset, leveraging recent autonomous driving datasets. xMUDA brings large improvements over uni-modal UDA on all tested scenarios, and is complementary to state-of-the-art UDA techniques.
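The mutual mimicking described above can be made concrete with a short sketch. Below is a minimal PyTorch illustration, not the authors' implementation: each modality carries an auxiliary mimicry head that matches the other modality's detached main prediction via KL divergence, so the mimicking signal stays disentangled from the supervised segmentation objective. All tensor names and sizes (`logits_2d_main`, `N`, `C`, etc.) are hypothetical.

```python
import torch
import torch.nn.functional as F

def xm_loss(logits_src, logits_tgt):
    """KL(target || source): the source head mimics the frozen target head."""
    log_p_src = F.log_softmax(logits_src, dim=1)
    p_tgt = F.softmax(logits_tgt.detach(), dim=1)  # stop gradient on target
    return F.kl_div(log_p_src, p_tgt, reduction="batchmean")

# Illustrative per-point class logits for one scan: each branch has a main
# head (supervised on source-domain labels) and an auxiliary mimicry head.
N, C = 4096, 10  # points per scan, number of classes (assumed values)
logits_2d_main, logits_2d_mim = torch.randn(N, C), torch.randn(N, C)
logits_3d_main, logits_3d_mim = torch.randn(N, C), torch.randn(N, C)

# Cross-modal loss: 2D mimics 3D and vice versa, via the auxiliary heads only,
# so neither main segmentation head is pushed toward the other's errors.
loss_xm = xm_loss(logits_2d_mim, logits_3d_main) + \
          xm_loss(logits_3d_mim, logits_2d_main)
```

Because the KL loss needs no labels, it can be applied on both source and target domains, which is what makes the mimicking usable for unsupervised adaptation.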
Keywords
3D point clouds,3D semantic segmentation,domain shift,segmentation objective,stronger modality,dataset-to-dataset,recent autonomous driving datasets,uni-modal UDA,state-of-the-art UDA techniques,cross-modal unsupervised Domain Adaptation,multimodal datasets,UDA approaches,multimodality,cross-modal UDA,UDA scenarios,country-to-country,xMUDA