MDAFNet: Monocular Depth-Assisted Fusion Networks for Semantic Segmentation of Complex Urban Remote Sensing Data
IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium (2023)
Abstract
This work proposes an end-to-end Monocular Depth-Assisted Fusion Network (MDAFNet) for semantic segmentation of complex urban remote sensing data. MDAFNet consists of a Monocular Depth Estimation Network (MDENet) and a Crossmodal Fusion Network (CFNet): MDENet first generates earth-surface depth information, and CFNet then fuses the generated depth with Red-Green-Blue (RGB) images to perform segmentation. In particular, MDENet effectively extracts ground-surface features while overcoming artifacts such as building shadows, and CFNet is designed to extract and fuse semantic information from the generated depth and the RGB images. Extensive experiments on the large-scale fine-resolution ISPRS Vaihingen remote sensing dataset confirm that MDAFNet outperforms conventional crossmodal models equipped with Digital Surface Model (DSM) information.
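The abstract describes a two-stage pipeline: a depth network (MDENet) first predicts a depth map from the RGB input, and a fusion network (CFNet) then combines the RGB image with that generated depth to produce per-pixel class labels. A minimal sketch of this data flow is shown below; the function bodies are toy placeholders (a luminance-style depth stub and a fixed random projection for per-pixel scores), not the paper's learned models, and all names besides MDENet/CFNet/MDAFNet are illustrative assumptions.

```python
import numpy as np

def mdenet(rgb):
    """Placeholder for the Monocular Depth Estimation Network:
    maps an H x W x 3 RGB image to an H x W depth map.
    Here a toy channel-mean stand-in, not the learned model."""
    return rgb.mean(axis=-1)

def cfnet(rgb, depth, num_classes=6):
    """Placeholder for the Crossmodal Fusion Network: fuses RGB with the
    generated depth (illustrated as channel concatenation) and predicts a
    per-pixel class label via a fixed random projection (toy stand-in)."""
    fused = np.concatenate([rgb, depth[..., None]], axis=-1)  # H x W x 4
    rng = np.random.default_rng(0)
    w = rng.standard_normal((fused.shape[-1], num_classes))
    scores = fused @ w               # H x W x num_classes per-pixel scores
    return scores.argmax(axis=-1)    # H x W label map

def mdafnet(rgb):
    """End-to-end pipeline sketched from the abstract: depth first,
    then crossmodal RGB-depth fusion for segmentation."""
    depth = mdenet(rgb)
    return cfnet(rgb, depth)
```

For example, feeding an 8x8 RGB tile through `mdafnet` yields an 8x8 label map, mirroring how the generated depth replaces the DSM channel used by conventional crossmodal models.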
Key words
CFNet, complex urban remote sensing data, Crossmodal Fusion Network, Digital Surface Model information, earth surface depth information, end-to-end Monocular Depth-Assisted Fusion Network, generated depth information, large-scale fine-resolution remote sensing dataset, MDAFNet, MDENet, Monocular Depth Estimation Network, Monocular Depth-Assisted Fusion Networks, Red-Green-Blue images, segmentation task, semantic information, semantic segmentation