Multimodal Co-learning: A Domain Adaptation Method for Building Extraction from Optical Remote Sensing Imagery.

JURSE (2023)

Abstract
In this paper, we aim to improve the transfer learning ability of 2D convolutional neural networks (CNNs) for building extraction from optical imagery and digital surface models (DSMs) using a 2D-3D co-learning framework. Unlabeled target domain data are incorporated as unlabeled training data pairs to optimize the training procedure. Our framework adaptively transfers unsupervised mutual information between the 2D and 3D modalities (i.e., DSM-derived point clouds) during the training phase via a soft connection, utilizing a predefined loss function. Experimental results from a spaceborne-to-airborne cross-domain case demonstrate that the presented framework can quantitatively and qualitatively improve building extraction results at test time from single-modality optical images.
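The abstract does not specify the exact form of the soft connection or loss. The following is a minimal, illustrative sketch of what such a 2D-3D co-learning objective could look like: a supervised term on labeled source pairs plus a cross-modal consistency term on unlabeled target pairs. The branch architectures, the `co_learning_step` helper, the choice of a sigmoid/BCE consistency term, and the weight `lam` are all assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of a 2D-3D co-learning objective (assumed, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Branch2D(nn.Module):
    """Toy 2D CNN over stacked optical + DSM channels (assumption: 4 input bands)."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),               # per-pixel building logit
        )
    def forward(self, x):
        return self.net(x)

class Branch3D(nn.Module):
    """Toy point-wise MLP over DSM-derived points (assumption: one xyz point per pixel)."""
    def __init__(self, in_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),                  # per-point building logit
        )
    def forward(self, pts):                    # pts: (B, H*W, 3)
        return self.net(pts)

def co_learning_step(b2d, b3d, batch, lam=0.1):
    """One training step: supervised loss on labeled source data plus a
    soft cross-modal consistency term on unlabeled target pairs."""
    # Supervised term on the labeled source domain (2D branch, matching the
    # single-modality inference setting described in the abstract).
    src_logits = b2d(batch["src_img"])                        # (B, 1, H, W)
    sup_loss = F.binary_cross_entropy_with_logits(src_logits, batch["src_mask"])

    # "Soft connection": encourage the 2D and 3D predictions to agree on
    # unlabeled target-domain data (the soft-teacher/BCE form is an assumption).
    tgt_2d = b2d(batch["tgt_img"]).flatten(1)                 # (B, H*W)
    tgt_3d = b3d(batch["tgt_pts"]).squeeze(-1)                # (B, H*W)
    p2d = torch.sigmoid(tgt_2d)
    p3d = torch.sigmoid(tgt_3d).detach()                      # 3D branch acts as a soft teacher
    consistency = F.binary_cross_entropy(p2d, p3d)

    return sup_loss + lam * consistency

if __name__ == "__main__":
    b2d, b3d = Branch2D(), Branch3D()
    H = W = 16
    batch = {
        "src_img":  torch.randn(2, 4, H, W),
        "src_mask": torch.randint(0, 2, (2, 1, H, W)).float(),
        "tgt_img":  torch.randn(2, 4, H, W),
        "tgt_pts":  torch.randn(2, H * W, 3),
    }
    loss = co_learning_step(b2d, b3d, batch)
    loss.backward()
    print(float(loss))
```

Only the 2D branch receives gradients from the consistency term in this sketch, so the 3D modality guides the optical network during training but is not needed at test time, which is consistent with the single-modality inference described in the abstract.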
Keywords
building extraction,multimodal data,co-learning,domain adaptation,transfer learning