Paper Title
AntiDote: Attention-based Dynamic Optimization for Neural Network Runtime Efficiency
Paper Authors
Paper Abstract
Convolutional Neural Networks (CNNs) achieve excellent cognitive performance at the expense of considerable computation load. To relieve this load, many optimization approaches have been developed that reduce model redundancy by identifying and removing insignificant model components, e.g., through weight sparsification and filter pruning. However, these works only evaluate the static significance of model components using internal parameter information, ignoring their dynamic interaction with external inputs. Since per-input feature activation can change a component's significance dynamically, static methods can only achieve sub-optimal results. Therefore, in this work we propose a dynamic CNN optimization framework. Based on the neural network attention mechanism, the framework comprises (1) testing-phase channel and column feature-map pruning and (2) training-phase optimization by targeted dropout. Such a dynamic optimization framework has several benefits: (1) it can accurately identify and aggressively remove per-input feature redundancy by taking the model-input interaction into account; (2) it can maximally remove feature-map redundancy across multiple dimensions thanks to its multi-dimensional flexibility; (3) the training-testing co-optimization favors dynamic pruning and helps maintain model accuracy even under very high feature pruning ratios. Extensive experiments show that our method brings a 37.4% to 54.5% FLOPs reduction with negligible accuracy drop on various test networks.
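To make the idea of per-input, attention-guided pruning concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the module name `DynamicChannelPruner`, the scoring branch, and the `keep_ratio`/`drop_prob` parameters are illustrative assumptions. It shows one plausible form of the two components the abstract describes: test-phase channel pruning driven by per-input attention scores, and training-phase targeted dropout applied to the same low-scoring channels.

```python
# Minimal sketch (assumed structure, not the paper's exact method):
# per-input dynamic channel pruning guided by a lightweight attention branch.
# Channel scores come from global average pooling + a small linear layer;
# at test time the lowest-scoring channels are masked out per input, and at
# training time those same low-scoring channels are dropped stochastically
# ("targeted dropout") so the network learns to tolerate the pruning.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicChannelPruner(nn.Module):
    def __init__(self, channels: int, keep_ratio: float = 0.5, drop_prob: float = 0.5):
        super().__init__()
        self.keep = max(1, int(channels * keep_ratio))  # channels kept per input
        self.drop_prob = drop_prob                      # targeted-dropout probability
        self.score = nn.Linear(channels, channels)      # attention scoring branch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map produced by a convolution layer
        n, c, _, _ = x.shape
        pooled = F.adaptive_avg_pool2d(x, 1).flatten(1)          # (N, C) channel summaries
        scores = torch.sigmoid(self.score(pooled))               # (N, C) attention scores
        topk = scores.topk(self.keep, dim=1).indices
        keep_mask = torch.zeros_like(scores).scatter_(1, topk, 1.0)

        if self.training:
            # Targeted dropout: channels outside the top-k are dropped with
            # probability drop_prob instead of being removed deterministically.
            drop_candidates = 1.0 - keep_mask
            dropped = drop_candidates * (torch.rand_like(scores) < self.drop_prob).float()
            mask = 1.0 - dropped
        else:
            # Test phase: deterministically prune every channel outside the top-k.
            mask = keep_mask

        return x * mask.view(n, c, 1, 1)


if __name__ == "__main__":
    layer = DynamicChannelPruner(channels=64, keep_ratio=0.5)
    layer.eval()
    feats = torch.randn(2, 64, 32, 32)
    out = layer(feats)
    # Roughly half of the channels are zeroed, independently for each input.
    print((out.abs().sum(dim=(2, 3)) > 0).sum(dim=1))  # -> tensor([32, 32])
```

In this sketch the pruning is expressed as a multiplicative mask for clarity; realizing the reported FLOPs savings would additionally require skipping the convolution work for masked channels (and, for the column-pruning component mentioned in the abstract, masking along the spatial dimension as well), which is outside the scope of this illustration.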