Attention-Guided Multi-Scale Fusion Network for Similar Objects Semantic Segmentation

Cogn. Comput. (2023)

Abstract
Image segmentation accuracy is critical in marine ecological monitoring with unmanned aerial vehicles (UAVs). By flying a drone over an area, we can quickly locate a variety of species. However, objects in remote sensing images, particularly objects from different classes, are often remarkably similar, and many objects are small, so general-purpose segmentation networks perform poorly. Inspired by camouflaged object detection and by how human attentional mechanisms manage the recognition of diverse objects, this work constructs attention networks that imitate the human cognitive system. We propose TriseNet, an attention-guided multi-scale fusion semantic segmentation network that addresses the challenges of high inter-class similarity and low segmentation accuracy in UAV imagery. First, we employ a bidirectional feature extraction network to extract low-level spatial and high-level semantic information. Second, we use the attention-induced cross-level fusion module (ACFM) to build a new multi-scale fusion branch that performs cross-level learning and strengthens the representation of similar inter-class objects. Finally, a receptive field block (RFB) module enlarges the receptive field, yielding richer features in specific layers. Although inter-class similarity greatly increases the difficulty of accurate segmentation, the three modules together improve feature expression and segmentation results. Experiments on our UAV dataset, UAV-OUC-SEG (55.61% MIoU), and on the public Cityscapes dataset (76.10% MIoU) demonstrate the effectiveness of our approach: on both datasets, TriseNet delivers the best results compared with other prominent segmentation algorithms.
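The cross-level fusion idea described above can be illustrated with a minimal NumPy sketch. Note that the paper's ACFM is a learned network module; the nearest-neighbor upsampling, global-average-pool channel gate, and the function names `upsample_nearest`, `channel_attention`, and `fuse` here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def upsample_nearest(x, factor):
    # x: (C, H, W) -> (C, H*factor, W*factor) by nearest-neighbor repetition
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def channel_attention(x):
    # Global average pooling over spatial dims, squashed to (0, 1) per channel
    pooled = x.mean(axis=(1, 2))           # shape (C,)
    return 1.0 / (1.0 + np.exp(-pooled))   # sigmoid gate, shape (C,)

def fuse(low, high):
    """Attention-guided cross-level fusion (illustrative sketch).

    low:  low-level spatial feature map, shape (C, H, W)
    high: high-level semantic feature map, shape (C, H//2, W//2)
    """
    high_up = upsample_nearest(high, 2)                  # align resolutions
    gate = channel_attention(high_up)[:, None, None]     # broadcast over H, W
    # High-level semantics gate the low-level detail, then the two are summed
    return gate * low + high_up
```

The key design point this sketch captures is that the semantic branch does not merely get concatenated with the spatial branch: it produces attention weights that re-weight the low-level features before fusion, which is what helps separate visually similar inter-class objects.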
Keywords
Semantic segmentation,Attention-guided,Multi-scale fusion,High inter-class similarity