Paper Title
Hyperspectral Image Super-resolution via Deep Spatio-spectral Convolutional Neural Networks
Paper Authors
Paper Abstract
Hyperspectral images are of crucial importance for better understanding the characteristics of different materials. To reach this goal, they rely on a large number of spectral bands. However, this attractive characteristic usually comes at the cost of a reduced spatial resolution compared with traditional multispectral imaging systems. To alleviate this issue, in this work, we propose a simple and efficient deep convolutional neural network architecture to fuse a low-resolution hyperspectral image (LR-HSI) and a high-resolution multispectral image (HR-MSI), yielding a high-resolution hyperspectral image (HR-HSI). The network is designed to preserve both spatial and spectral information through a two-fold architecture: one branch utilizes the HR-HSI at a different scale to obtain an output with satisfactory spectral preservation, while the other applies concepts from multi-resolution analysis to extract high-frequency information, aiming to produce high-quality spatial details. Finally, a plain mean squared error loss function is used to measure the performance during training. Extensive experiments demonstrate that the proposed network architecture achieves the best performance, both qualitatively and quantitatively, compared with recent state-of-the-art hyperspectral image super-resolution approaches. Moreover, the proposed approach offers other significant advantages, such as better network generalization ability, a limited computational burden, and robustness with respect to the number of training samples.
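To make the fusion idea in the abstract more concrete, below is a minimal, illustrative PyTorch sketch of a two-branch LR-HSI/HR-MSI fusion network trained with a plain MSE loss. The band counts, layer widths, upsampling choice, and the high-pass stand-in for the multi-resolution analysis step are assumptions for illustration only; the abstract does not specify the authors' actual configuration.

```python
# Minimal sketch (PyTorch) of a two-branch spatio-spectral fusion CNN.
# All architectural details here are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatioSpectralFusionNet(nn.Module):
    def __init__(self, hsi_bands=31, msi_bands=3, width=64):
        super().__init__()
        # Spectral-preservation branch: fuse the upsampled LR-HSI with the HR-MSI.
        self.spectral_branch = nn.Sequential(
            nn.Conv2d(hsi_bands + msi_bands, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, hsi_bands, 3, padding=1),
        )
        # Spatial-detail branch: extract high-frequency information from the HR-MSI
        # (a simple stand-in for the multi-resolution-analysis concept).
        self.detail_branch = nn.Sequential(
            nn.Conv2d(msi_bands, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, hsi_bands, 3, padding=1),
        )

    def forward(self, lr_hsi, hr_msi):
        # Upsample the LR-HSI to the HR-MSI spatial size before fusion.
        hsi_up = F.interpolate(lr_hsi, size=hr_msi.shape[-2:],
                               mode="bicubic", align_corners=False)
        base = self.spectral_branch(torch.cat([hsi_up, hr_msi], dim=1))
        # High-pass residual of the HR-MSI as a crude high-frequency component.
        msi_hf = hr_msi - F.avg_pool2d(hr_msi, 3, stride=1, padding=1)
        detail = self.detail_branch(msi_hf)
        return base + detail


if __name__ == "__main__":
    net = SpatioSpectralFusionNet()
    lr_hsi = torch.randn(2, 31, 16, 16)   # low-resolution hyperspectral input
    hr_msi = torch.randn(2, 3, 64, 64)    # high-resolution multispectral input
    target = torch.randn(2, 31, 64, 64)   # high-resolution hyperspectral reference
    pred = net(lr_hsi, hr_msi)
    loss = F.mse_loss(pred, target)       # plain MSE loss, as stated in the abstract
    loss.backward()
    print(pred.shape, loss.item())
```

The sketch keeps the two roles separate: one branch is fed both inputs to favor spectral fidelity, and the other injects high-frequency spatial detail extracted from the HR-MSI, with the two outputs summed to form the HR-HSI estimate.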