Paper Title

Single Image Super-Resolution via Residual Neuron Attention Networks

Authors

Wenjie Ai, Xiaoguang Tu, Shilei Cheng, Mei Xie

Abstract

Deep Convolutional Neural Networks (DCNNs) have achieved impressive performance in Single Image Super-Resolution (SISR). To further improve performance, existing CNN-based methods generally focus on designing deeper network architectures. However, we argue that blindly increasing a network's depth is not the most sensible approach. In this paper, we propose a novel end-to-end Residual Neuron Attention Network (RNAN) for more efficient and effective SISR. Structurally, our RNAN is a sequential integration of well-designed Global Context-enhanced Residual Groups (GCRGs), which extract super-resolved features from coarse to fine. Our GCRG is designed with two novelties. First, a Residual Neuron Attention (RNA) mechanism is proposed in each block of a GCRG to reveal the relevance of neurons for better feature representation. Furthermore, a Global Context (GC) block is embedded at the end of each GCRG to effectively model global contextual information. Experimental results demonstrate that our RNAN achieves results comparable to state-of-the-art methods in terms of both quantitative metrics and visual quality, yet with a simpler network architecture.
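The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of how an RNA-style gated residual block and a simplified global-context block could be composed into a GCRG. All module names, layer choices, and hyperparameters below are assumptions made for illustration, not the authors' actual design.

```python
# Hypothetical sketch of RNA blocks and a GC block inside a GCRG.
# Channel counts, kernel sizes, and the sigmoid gating form are assumptions.
import torch
import torch.nn as nn


class ResidualNeuronAttention(nn.Module):
    """Residual block with an element-wise (per-neuron) attention gate (assumed form)."""

    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # 1x1 conv producing a gate over every neuron of the feature map
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.body(x)
        # Residual connection plus gated features
        return x + feat * self.gate(feat)


class GlobalContextBlock(nn.Module):
    """Simplified global-context block: global pooling -> channel transform -> broadcast add."""

    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )

    def forward(self, x):
        # Add the globally pooled, transformed context back to every position
        return x + self.transform(self.pool(x))


class GCRG(nn.Module):
    """Global Context-enhanced Residual Group: several RNA blocks followed by a GC block."""

    def __init__(self, channels=64, num_blocks=4):
        super().__init__()
        self.blocks = nn.Sequential(
            *[ResidualNeuronAttention(channels) for _ in range(num_blocks)],
            GlobalContextBlock(channels),
        )

    def forward(self, x):
        # Long residual connection over the whole group
        return x + self.blocks(x)


if __name__ == "__main__":
    # Quick shape check on a dummy feature map
    group = GCRG(channels=64, num_blocks=4)
    out = group(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In the abstract's description, a full RNAN would stack several such GCRGs sequentially between a shallow feature extractor and an upsampling/reconstruction tail; those surrounding components are not sketched here.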
