DANT-GAN: A dual attention-based nested training network for infrared and visible image fusion

Kaixin Li, Gang Liu, Xinjie Gu, Haojie Tang, Jinxin Xiong, Yao Qian

Digital Signal Processing (2024)

Abstract
Existing attention-based infrared and visible image fusion methods can perceive the most discriminative regions of the two images. However, they tend to assign excessive weight to the regions of interest, so some details in local regions are ignored. To solve this problem, we propose a Dual Attention-based Nested Training Network (DANT-GAN), which uses two attention mechanisms to extract and fuse features for local regions and for the whole image, respectively. The training process for the entire image is nested inside the training process for the local regions: at each epoch, the output of the local attention mechanism serves as input for extracting and fusing mixed-domain features of the whole image. In this way, the information in the attention regions is preserved, and information lost during the feature extraction phase is compensated. In addition, training is accelerated by adding a generated teaching network branch, which prompts the model to learn to mimic the fusion ground truth. Experiments show that DANT-GAN captures both the local and global attention characteristics of the source images and renders them in a single fused image. The fusion result preserves the information of the source images, avoids the detail loss caused by relying on a single attention mechanism, and requires fewer computing resources. Compared with other state-of-the-art fusion methods on two public datasets, our method achieves the best values on SCD, CC, and SSIM. Finally, the effectiveness of the proposed method is verified by ablation experiments.
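The dual local/global attention idea described above can be illustrated with a minimal numerical sketch. The following is a hypothetical toy illustration in plain NumPy, not the authors' GAN: a local pass fuses the two registered images tile by tile with per-pixel attention weights, and its result is blended with a global attention pass, echoing the nested scheme in which local-attention output feeds the whole-image stage. The saliency proxy and all function names here are assumptions made for illustration.

```python
import numpy as np

def saliency(img, eps=1e-8):
    """Crude attention proxy: per-pixel deviation from the mean intensity.

    This stands in for a learned attention map; eps avoids division by zero
    on constant regions.
    """
    return np.abs(img - img.mean()) + eps

def attention_fuse(ir, vis):
    """Fuse two registered images with per-pixel attention weights."""
    a_ir, a_vis = saliency(ir), saliency(vis)
    w = a_ir / (a_ir + a_vis)          # weight of the infrared branch
    return w * ir + (1.0 - w) * vis    # per-pixel convex combination

def local_fuse(ir, vis, tile=4):
    """Local attention: fuse tile by tile so small details keep their weights."""
    out = np.empty_like(ir)
    h, w = ir.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            sl = (slice(y, min(y + tile, h)), slice(x, min(x + tile, w)))
            out[sl] = attention_fuse(ir[sl], vis[sl])
    return out

def nested_fuse(ir, vis, alpha=0.5):
    """Nested scheme: blend the local-attention result with a whole-image pass."""
    local_result = local_fuse(ir, vis)
    global_result = attention_fuse(ir, vis)
    return alpha * local_result + (1.0 - alpha) * global_result
```

Because every stage is a per-pixel convex combination, the fused intensities stay bounded by the source images, which is one way the sketch mirrors the paper's goal of preserving source information rather than over-weighting a single region.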
Keywords
Image fusion, Attention mechanism, Generated teaching network, Generative adversarial networks