Paper Title
Hyperspectral Image Super-resolution via Deep Progressive Zero-centric Residual Learning
Paper Authors
Paper Abstract
This paper explores the problem of hyperspectral image (HSI) super-resolution, which merges a low-resolution HSI (LR-HSI) and a high-resolution multispectral image (HR-MSI). The cross-modality distribution of the spatial and spectral information makes the problem challenging. Inspired by classic wavelet-decomposition-based image fusion, we propose a novel \textit{lightweight} deep neural network-based framework, namely the progressive zero-centric residual network (PZRes-Net), to address this problem efficiently and effectively. Specifically, PZRes-Net learns a high-resolution, \textit{zero-centric} residual image, which contains high-frequency spatial details of the scene across all spectral bands, from both inputs in a progressive fashion along the spectral dimension. The resulting residual image is then superimposed onto the up-sampled LR-HSI in a \textit{mean-value invariant} manner, yielding a coarse HR-HSI, which is further refined by exploring the coherence across all spectral bands simultaneously. To learn the residual image efficiently and effectively, we employ spectral-spatial separable convolution with dense connections. In addition, we propose zero-mean normalization, applied to the feature maps of each layer, to realize the zero-mean characteristic of the residual image. Extensive experiments over both real and synthetic benchmark datasets demonstrate that our PZRes-Net outperforms state-of-the-art methods to a \textit{significant} extent in terms of 4 quantitative metrics as well as visual quality; e.g., our PZRes-Net improves the PSNR by more than 3 dB, while using 2.3$\times$ fewer parameters and consuming 15$\times$ fewer FLOPs. The code is publicly available at https://github.com/zbzhzhy/PZRes-Net .
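To make the two key ideas in the abstract concrete, below is a minimal PyTorch sketch (not the authors' released code; the layer and function names are hypothetical) of zero-mean normalization and of the mean-value-invariant superimposition: because the residual is forced to have zero spatial mean per band, adding it to the up-sampled LR-HSI leaves every band's mean intensity unchanged.

```python
# Minimal sketch, assuming NCHW (batch, bands, height, width) feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ZeroMeanNorm(nn.Module):
    """Subtracts the per-band spatial mean so the learned residual stays zero-centric."""
    def forward(self, x):
        return x - x.mean(dim=(2, 3), keepdim=True)


def superimpose(lr_hsi, residual, scale):
    """Up-samples the LR-HSI and adds a zero-mean residual.

    Since the residual has zero mean in every band, the per-band mean of the
    output equals that of the up-sampled LR-HSI (mean-value invariance).
    """
    up = F.interpolate(lr_hsi, scale_factor=scale, mode='bicubic', align_corners=False)
    return up + ZeroMeanNorm()(residual)


if __name__ == "__main__":
    lr = torch.rand(1, 31, 16, 16)       # toy LR-HSI with 31 spectral bands
    res = torch.rand(1, 31, 128, 128)    # toy high-frequency residual image
    hr = superimpose(lr, res, scale=8)
    print(hr.shape)                      # torch.Size([1, 31, 128, 128])
```

In the full PZRes-Net, the residual itself is produced by a progressive, spectrally grouped network built from spectral-spatial separable convolutions with dense connections; the sketch above only isolates the normalization and fusion step described in the abstract.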