Paper Title

WaveNets: Wavelet Channel Attention Networks

Authors

Hadi Salman, Caleb Parks, Shi Yin Hong, Justin Zhan

Abstract

Channel attention reigns supreme as an effective technique in the field of computer vision. However, the channel attention proposed by SENet suffers from information loss in feature learning caused by its use of Global Average Pooling (GAP) to represent each channel as a scalar. Thus, designing an effective channel attention mechanism requires a solution that preserves features while modeling channel inter-dependencies. In this work, we utilize wavelet transform compression as a solution to the channel representation problem. We first test the wavelet transform as an auto-encoder model equipped with a conventional channel attention module. Next, we test the wavelet transform as a standalone channel compression method. We prove that global average pooling is equivalent to the recursive approximate Haar wavelet transform. With this proof, we generalize channel attention using wavelet compression and name the result WaveNet. Our method can be embedded within existing channel attention methods with a couple of lines of code. We test our proposed method on the ImageNet dataset for the image classification task. Our method outperforms the baseline SENet and achieves state-of-the-art results. Our code implementation is publicly available at https://github.com/hady1011/WaveNet-C.
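The GAP-Haar equivalence claimed in the abstract can be seen directly: for a length-$N$ signal $x$ with $N = 2^J$, one orthonormal Haar low-pass step is $a_k = (x_{2k} + x_{2k+1})/\sqrt{2}$, and applying it recursively $J$ times yields a single coefficient $a^{(J)} = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} x_i = \sqrt{N}\,\mathrm{GAP}(x)$, i.e. GAP equals the fully recursive Haar approximation up to the normalization $\sqrt{N}$; stopping the recursion earlier keeps a richer per-channel descriptor. Below is a minimal PyTorch sketch of this idea as an SE-style block, not the authors' released WaveNet-C code; the names `WaveletChannelAttention`, `haar_approx`, and `keep_size` are illustrative assumptions.

```python
# Minimal sketch, assuming an SE-style backbone. Illustrative only;
# see https://github.com/hady1011/WaveNet-C for the authors' implementation.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def haar_approx(x: torch.Tensor, levels: int) -> torch.Tensor:
    # One level of the separable 2D Haar transform keeps the low-low band;
    # with the orthonormal filter [1/sqrt(2), 1/sqrt(2)] this equals
    # 2 * the mean over each 2x2 block, i.e. 2 * avg_pool2d(x, 2).
    for _ in range(levels):
        x = 2.0 * F.avg_pool2d(x, kernel_size=2)
    return x

class WaveletChannelAttention(nn.Module):
    """SE-style gate whose squeeze step keeps a keep_size x keep_size Haar
    approximation map per channel instead of a single GAP scalar."""

    def __init__(self, channels: int, keep_size: int = 2, reduction: int = 16):
        super().__init__()
        self.keep_size = keep_size  # spatial size of the retained low band
        d = channels * keep_size * keep_size
        self.fc = nn.Sequential(
            nn.Linear(d, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Recurse until the low band is keep_size x keep_size (assumes h == w
        # and h is a power-of-two multiple of keep_size).
        levels = int(math.log2(h // self.keep_size))
        z = haar_approx(x, levels).reshape(b, -1)  # (b, c * keep_size^2)
        s = self.fc(z).view(b, c, 1, 1)            # per-channel gate in (0, 1)
        return x * s

# Usage: gate a (batch, 64, 32, 32) feature map.
attn = WaveletChannelAttention(channels=64, keep_size=2)
y = attn(torch.randn(4, 64, 32, 32))
```

With `keep_size=1` the squeeze collapses to (a scaled) GAP and the block reduces to ordinary SE attention, which is the sense in which wavelet compression generalizes channel attention.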
