Paper Title

Network Pruning via Annealing and Direct Sparsity Control

Paper Authors

Yangzi Guo, Yiyuan She, Adrian Barbu

Paper Abstract

Artificial neural networks (ANNs), especially deep convolutional networks, are very popular these days and have proven to offer reliable solutions to many vision problems. However, the use of deep neural networks is widely impeded by their intensive computational and memory costs. In this paper, we propose a novel and efficient network pruning method that is suitable for both non-structured and structured channel-level pruning. Our proposed method tightens a sparsity constraint by gradually removing network parameters or filter channels based on a criterion and a schedule. The attractive fact that the network size keeps dropping throughout the iterations makes it suitable for pruning any untrained or pre-trained network. Because our method uses an $L_0$ constraint instead of an $L_1$ penalty, it does not introduce any bias in the training parameters or filter channels. Furthermore, the $L_0$ constraint makes it easy to directly specify the desired sparsity level during the network pruning process. Finally, experimental validation on extensive synthetic and real vision datasets shows that the proposed method obtains better or competitive performance compared to other state-of-the-art network pruning methods.
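Since the abstract describes the method only at a high level, the following minimal PyTorch sketch illustrates the general idea: annealing the allowed support from the full parameter count down to a directly specified $L_0$ sparsity level. The schedule formula, the magnitude-based removal criterion, and all names (`keep_schedule`, `magnitude_prune_`, `mu`) are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

def keep_schedule(epoch, num_epochs, full_count, target_count, mu=10.0):
    # Annealing schedule: how many parameters survive at a given epoch.
    # This particular formula is an assumption for illustration; the paper
    # only requires a schedule that shrinks the allowed support from the
    # full size down to the desired sparsity level.
    frac = max(0.0, (num_epochs - 2.0 * epoch) / (2.0 * epoch * mu + num_epochs))
    return int(target_count + (full_count - target_count) * frac)

def magnitude_prune_(weight, keep_count):
    # Enforce the L0 constraint ||w||_0 <= keep_count directly by zeroing
    # all but the keep_count largest-magnitude entries. Using magnitude as
    # the removal criterion is also an assumption.
    flat = weight.detach().abs().flatten()
    if keep_count >= flat.numel():
        return
    threshold = flat.topk(keep_count).values.min()
    with torch.no_grad():
        weight.mul_(weight.abs() >= threshold)

# Hypothetical usage: prune one linear layer while training on random data.
layer = nn.Linear(256, 10)
optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)
full = layer.weight.numel()
target = full // 10  # the desired sparsity level is specified directly
num_epochs = 30
for epoch in range(num_epochs):
    x, y = torch.randn(64, 256), torch.randint(0, 10, (64,))
    loss = nn.functional.cross_entropy(layer(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()  # pruned weights may drift after the update...
    # ...so re-prune to the current schedule count, resetting them to zero.
    magnitude_prune_(layer.weight, keep_schedule(epoch, num_epochs, full, target))
```

Note that because the sparsity is imposed as a constraint on the number of nonzeros rather than via an $L_1$ penalty term, the surviving weights are not shrunk toward zero, which is the absence of bias the abstract refers to.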
