Paper Title

Infrared and Visible Image Fusion via Interactive Compensatory Attention Adversarial Learning

Paper Authors

Wang, Zhishe; Shao, Wenyu; Chen, Yanlin; Xu, Jiawei; Zhang, Xiaoqin

Paper Abstract

Existing generative adversarial fusion methods generally concatenate the source images and extract local features through convolution operations, without considering their global characteristics, which tends to produce unbalanced results biased towards either the infrared or the visible image. To this end, we propose a novel end-to-end model based on generative adversarial training to achieve a better fusion balance, termed the \textit{interactive compensatory attention fusion network} (ICAFusion). In particular, in the generator, we construct a multi-level encoder-decoder network with a triple path, and adopt the infrared and visible paths to provide additional intensity and gradient information. Moreover, we develop interactive and compensatory attention modules to communicate their pathwise information and model their long-range dependencies to generate attention maps, which focus more on infrared target perception and visible detail characterization, and further increase the representation power of feature extraction and feature reconstruction. In addition, dual discriminators are designed to identify the similarity of distributions between the fused result and the source images, so that the generator is optimized to produce a more balanced result. Extensive experiments illustrate that our ICAFusion achieves superior fusion performance and better generalization ability, outperforming other advanced methods in both subjective visual description and objective metric evaluation. Our code will be publicly available at \url{https://github.com/Zhishe-Wang/ICAFusion}.
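To make the pathwise interaction described in the abstract concrete, below is a minimal PyTorch sketch of how two feature paths (infrared and visible) might exchange spatial attention maps built from globally pooled statistics. The module name CrossPathAttention, the max/mean pooling, and all layer choices are illustrative assumptions rather than the authors' implementation; the official code is at the GitHub URL above.

```python
import torch
import torch.nn as nn


class CrossPathAttention(nn.Module):
    """Hypothetical sketch: each path derives a spatial attention map from
    globally pooled statistics and applies it to the *other* path, so that
    infrared intensity cues and visible detail cues are exchanged."""

    def __init__(self):
        super().__init__()
        # A 7x7 conv over pooled maps is a common spatial-attention choice;
        # the paper's module may model long-range dependencies differently.
        self.conv_ir = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        self.conv_vis = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    @staticmethod
    def _pool(x: torch.Tensor) -> torch.Tensor:
        # Channel-wise max and mean summarize each spatial location.
        return torch.cat([x.max(dim=1, keepdim=True).values,
                          x.mean(dim=1, keepdim=True)], dim=1)

    def forward(self, feat_ir, feat_vis):
        attn_ir = torch.sigmoid(self.conv_ir(self._pool(feat_ir)))    # salient IR targets
        attn_vis = torch.sigmoid(self.conv_vis(self._pool(feat_vis)))  # salient visible details
        # Interactive exchange: each path is modulated by the other's map.
        return feat_ir * attn_vis, feat_vis * attn_ir


# Quick shape check on dummy 64-channel feature maps:
ir, vis = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
out_ir, out_vis = CrossPathAttention()(ir, vis)
print(out_ir.shape, out_vis.shape)  # torch.Size([1, 64, 32, 32]) twice
```

The multiplication by the other path's attention map is one plausible reading of "interactive"; a "compensatory" variant could instead reweight each path by the complement (1 - attention) of its own map, so that information weak in one modality is emphasized in the other.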
