Paper Title
Pansharpening via Frequency-Aware Fusion Network with Explicit Similarity Constraints
Paper Authors
Paper Abstract
The process of fusing a high-spatial-resolution (HR) panchromatic (PAN) image and a low-spatial-resolution (LR) multispectral (MS) image to obtain an HRMS image is known as pansharpening. With the development of convolutional neural networks, the performance of pansharpening methods has improved; however, blurry effects and spectral distortion still exist in their fusion results, owing to insufficient detail learning and the frequency mismatch between MS and PAN images. Improving spatial details while reducing spectral distortion therefore remains a challenge. In this paper, we propose a frequency-aware fusion network (FAFNet), together with a novel high-frequency (HF) feature similarity loss, to address the above problems. FAFNet is mainly composed of two kinds of blocks: frequency-aware blocks, which extract features in the frequency domain with the help of discrete wavelet transform (DWT) layers, and frequency fusion blocks, which reconstruct the features and transform them from the frequency domain back to the spatial domain with the assistance of inverse DWT (IDWT) layers. Finally, the fusion result is obtained through a convolutional block. To learn the correspondence, we also propose an HF feature similarity loss that constrains the HF features derived from the PAN and MS branches, so that the HF features of PAN can reasonably be used to supplement those of MS. Experimental results on three datasets, at both reduced and full resolution, demonstrate the superiority of the proposed method over several state-of-the-art pansharpening models.
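To make the two mechanisms in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a one-level orthonormal Haar DWT applied channel-wise (the abstract does not specify the wavelet) and an L1 form for the HF feature similarity loss (the abstract only states that HF features of the PAN and MS branches are constrained to be similar). The names dwt2, idwt2, and hf_similarity_loss are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def haar_filters(channels):
    # 2x2 orthonormal Haar analysis filters: one low-pass (LL) and
    # three high-pass (LH, HL, HH) subband filters, repeated per channel.
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    filt = torch.stack([ll, lh, hl, hh]).unsqueeze(1)  # (4, 1, 2, 2)
    return filt.repeat(channels, 1, 1, 1)              # (4*C, 1, 2, 2)


def dwt2(x):
    # One-level Haar DWT, applied channel-wise via a grouped strided conv,
    # as a DWT layer inside a frequency-aware block might do.
    b, c = x.shape[:2]
    w = haar_filters(c).to(x.device, x.dtype)
    y = F.conv2d(x, w, stride=2, groups=c)             # (B, 4*C, H/2, W/2)
    y = y.view(b, c, 4, y.shape[-2], y.shape[-1])
    return y[:, :, 0], y[:, :, 1:]                     # LL, stacked (LH, HL, HH)


def idwt2(ll, hf):
    # Inverse Haar DWT via a grouped transposed conv; with orthonormal
    # filters this reconstructs the input exactly (an IDWT layer inside
    # a frequency fusion block would map features back to the spatial domain).
    b, c = ll.shape[:2]
    y = torch.cat([ll.unsqueeze(2), hf], dim=2)        # (B, C, 4, H/2, W/2)
    y = y.view(b, 4 * c, y.shape[-2], y.shape[-1])
    w = haar_filters(c).to(ll.device, ll.dtype)
    return F.conv_transpose2d(y, w, stride=2, groups=c)


def hf_similarity_loss(feat_pan, feat_ms):
    # Assumed L1 form: penalize the distance between the high-frequency
    # subbands of the PAN-branch and MS-branch features, so the PAN HF
    # features can plausibly supplement those of MS.
    _, hf_pan = dwt2(feat_pan)
    _, hf_ms = dwt2(feat_ms)
    return F.l1_loss(hf_pan, hf_ms)


if __name__ == "__main__":
    x = torch.randn(2, 8, 64, 64)
    ll, hf = dwt2(x)
    assert torch.allclose(idwt2(ll, hf), x, atol=1e-5)  # perfect reconstruction
    loss = hf_similarity_loss(torch.randn(2, 8, 64, 64),
                              torch.randn(2, 8, 64, 64))
    print(loss.item())
```

Because the four Haar filters form an orthonormal basis of each 2x2 block, the strided conv and its transpose give exact analysis/synthesis, which is why a DWT/IDWT pair can be embedded in a network without losing information; the choice of wavelet, loss weighting, and where in the branches the loss is applied are details left to the full paper.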