Paper Title
DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion
Paper Authors
Abstract
Infrared and visible image fusion, a hot topic in image processing, aims to obtain fused images that retain the advantages of both source images. This paper proposes a novel auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into background and detail feature maps carrying low- and high-frequency information, respectively, while the decoder recovers the original image. To this end, the loss function drives the background feature maps of the source images to be similar and their detail feature maps to be dissimilar. In the test phase, the background and detail feature maps are merged separately via a fusion module, and the decoder recovers the fused image. Qualitative and quantitative results show that our method generates fused images with highlighted targets and abundant detail texture, exhibits strong robustness, and surpasses state-of-the-art (SOTA) approaches.
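The abstract's training objective — pull the two background feature maps together and push the two detail feature maps apart — can be sketched numerically. This is a minimal NumPy illustration of one plausible form of such a decomposition loss, not the paper's exact formulation: the function names, the L2 gap measure, the `tanh` bounding, and the additive fusion rule are all assumptions made here for clarity.

```python
import numpy as np

def decomposition_loss(b_ir, b_vis, d_ir, d_vis, alpha=1.0):
    """Toy decomposition loss (illustrative, not the paper's exact loss).

    Background maps of the infrared and visible images should be similar
    (small L2 gap, penalized), while detail maps should be dissimilar
    (large L2 gap, rewarded). tanh bounds both terms so the 'dissimilar'
    reward cannot grow without limit; alpha balances the two terms.
    """
    bg_gap = np.mean((b_ir - b_vis) ** 2)
    detail_gap = np.mean((d_ir - d_vis) ** 2)
    return np.tanh(bg_gap) - alpha * np.tanh(detail_gap)

def fuse(f_ir, f_vis):
    """Toy test-time fusion rule: element-wise addition of the two
    feature maps (one simple choice of fusion module)."""
    return f_ir + f_vis
```

Under this form, a decomposition with matching backgrounds and differing details scores lower (better) than one with the roles reversed, which is exactly the behavior the loss is meant to encourage.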