Paper Title
Dynamic Scene Deblurring Based on Continuous Cross-Layer Attention Transmission
Paper Authors
Paper Abstract
Deep convolutional neural networks (CNNs) that use attention mechanisms have achieved great success in dynamic scene deblurring. In most of these networks, only the features refined by the attention maps are passed to the next layer, and the attention maps of different layers are kept separate from each other, which does not make full use of the attention information from different layers in the CNN. To address this problem, we introduce a new continuous cross-layer attention transmission (CCLAT) mechanism that can exploit hierarchical attention information from all the convolutional layers. Based on the CCLAT mechanism, we use a very simple attention module to construct a novel residual dense attention fusion block (RDAFB). In an RDAFB, the attention maps inferred from the outputs of the preceding RDAFB and from each layer are directly connected to the subsequent ones, which realizes the CCLAT mechanism. Taking RDAFB as the building block, we design an effective architecture for dynamic scene deblurring named RDAFNet. Experiments on benchmark datasets show that the proposed model outperforms state-of-the-art deblurring approaches and demonstrate the effectiveness of the CCLAT mechanism.
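Since the abstract only describes the mechanism at a high level, the following PyTorch-style sketch illustrates the core idea of passing attention maps forward across layers and blocks rather than discarding them after one refinement. The module names (SimpleAttention, RDAFB), channel sizes, layer counts, and wiring details are illustrative assumptions, not the authors' reference implementation.

```python
# Minimal sketch of the CCLAT idea described in the abstract (assumed wiring,
# not the official RDAFNet code).
import torch
import torch.nn as nn


class SimpleAttention(nn.Module):
    """A very simple attention module: one conv followed by a sigmoid."""

    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))  # per-pixel attention map in [0, 1]


class RDAFB(nn.Module):
    """Illustrative residual dense attention fusion block.

    Each layer's attention map is retained and fed to all later layers
    (and handed to the next block), instead of being used once and dropped.
    """

    def __init__(self, channels, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        self.attentions = nn.ModuleList()
        for i in range(num_layers):
            # Dense connectivity: layer i sees the current features plus the
            # attention map from the previous block and the i earlier layers.
            in_ch = channels + (i + 1) * channels
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)))
            self.attentions.append(SimpleAttention(channels))
        self.fuse = nn.Conv2d(channels * (num_layers + 1), channels, kernel_size=1)

    def forward(self, x, prev_attn):
        # prev_attn: attention map transmitted from the preceding RDAFB.
        feats, attn_maps, collected = x, [prev_attn], []
        for layer, attn in zip(self.layers, self.attentions):
            inp = torch.cat([feats] + attn_maps, dim=1)
            feats = layer(inp)
            a = attn(feats)
            feats = feats * a        # refine features with the current map
            attn_maps.append(a)      # ...but also keep the map itself
            collected.append(feats)
        out = self.fuse(torch.cat([x] + collected, dim=1)) + x  # residual fusion
        return out, attn_maps[-1]    # transmit an attention map to the next block
```

In this sketch, chaining blocks as `x, a = block(x, a)` lets attention information flow continuously from one RDAFB to the next, which is what "continuous cross-layer attention transmission" refers to in the abstract.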