A weight induced contrast map for infrared and visible image fusion

Computers and Electrical Engineering (2024)

Abstract
Fusion involves merging details from infrared (IR) and visible images to generate a unified composite image that offers richer and more valuable information than either individual image. Surveillance, navigation, remote sensing, and military applications rely on various imaging modalities, including visible and IR, to monitor specific scenes. Because these sensors provide supplementary data and improve situational understanding, it is essential to fuse their information into a single image. Fusing IR and visible images presents several challenges due to differences in imaging modalities and data characteristics, and the need for accurate and meaningful integration of information. In this context, a novel image fusion architecture is proposed that focuses on enhancing prominent targets, with the objective of integrating thermal information from IR images into visible images while preserving the textural details of the visible images. In the proposed algorithm, the source images are first decomposed into high- and low-frequency components using a guided filter and an average filter, respectively. A unique contrast detection mechanism is proposed that preserves the contrast information of the original images. The contrast details of the IR and visible images are then enhanced using local standard deviation filtering and local range filtering, respectively. A new weight-map construction strategy is developed that effectively preserves the supplementary data of both source images. These weights, together with the gradient details of the source images, are used to preserve the salient feature details of the images acquired from the different modalities. A decision-making approach is applied to the high-frequency components of the original images to retain their prominent feature details. Finally, the salient and prominent feature details are integrated to generate the fused image. The developed technique is validated from both subjective and quantitative perspectives. Against deep learning-based approaches, it achieves EN, MI, Nabf, and SD values of 6.86815, 13.73269, 0.15390, and 78.16158, respectively; against existing traditional fusion methods, it achieves EN, MI, Nabf, FMIw, and Qabf values of 6.86815, 13.73269, 0.15390, 0.41634, and 0.47196, respectively. Overall, the developed technique provides adequate accuracy compared with twenty-seven state-of-the-art techniques, as illustrated conceptually by the sketch below.
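The decomposition, contrast-map, and fusion steps summarized above can be pictured with a minimal Python (NumPy/SciPy) sketch. The box-filter guided-filter variant, the filter radii, the contrast-weighted averaging of the low-frequency parts, and the max-absolute decision rule for the high-frequency parts are illustrative assumptions for exposition only, not the authors' actual settings, weight-map construction, or code.

```python
# Hypothetical sketch of the decomposition and contrast-map steps described in the
# abstract; all parameter values and fusion rules here are illustrative guesses.
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Simple box-filter guided filter (He et al.), self-guided when guide == src."""
    box = lambda x: uniform_filter(x, size=2 * radius + 1)
    mean_I, mean_p = box(guide), box(src)
    var_I = box(guide * guide) - mean_I ** 2
    cov_Ip = box(guide * src) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)

def decompose(img, radius=8, avg_size=31):
    """Split an image into high- and low-frequency parts (guided / average filter)."""
    high = img - guided_filter(img, img, radius)   # detail left after guided smoothing
    low = uniform_filter(img, size=avg_size)       # average-filtered base layer
    return high, low

def local_std(img, size=7):
    """Local standard deviation filter (used here as the IR contrast map)."""
    mean = uniform_filter(img, size=size)
    return np.sqrt(np.clip(uniform_filter(img * img, size=size) - mean ** 2, 0, None))

def local_range(img, size=7):
    """Local range filter (used here as the visible-image contrast map)."""
    return maximum_filter(img, size=size) - minimum_filter(img, size=size)

def fuse(ir, vis):
    """Toy fusion: contrast maps weight the low-frequency parts, and a per-pixel
    max-absolute decision keeps the stronger high-frequency detail."""
    ir, vis = ir.astype(np.float64), vis.astype(np.float64)
    hf_ir, lf_ir = decompose(ir)
    hf_vis, lf_vis = decompose(vis)
    w_ir, w_vis = local_std(ir), local_range(vis)
    low = (w_ir * lf_ir + w_vis * lf_vis) / (w_ir + w_vis + 1e-12)
    high = np.where(np.abs(hf_ir) >= np.abs(hf_vis), hf_ir, hf_vis)
    return np.clip(low + high, 0, 255)
```

This sketch only mirrors the overall flow (two-scale decomposition, per-modality contrast maps, weighted low-frequency combination, decision-based high-frequency selection); the paper's actual weight maps additionally use gradient details of the source images.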
Keywords
Infrared image, Visible image, Image decomposition, Contrast detection map, Weight map