Paper Title


Can pruning improve certified robustness of neural networks?

Paper Authors

Zhangheng Li, Tianlong Chen, Linyi Li, Bo Li, Zhangyang Wang

Paper Abstract

With the rapid development of deep learning, neural networks have grown so large that training and inference often overwhelm hardware resources. Given that neural networks are typically over-parameterized, one effective way to reduce this computational overhead is neural network pruning, which removes redundant parameters from trained networks. It has recently been observed that pruning can not only reduce computational overhead but also improve the empirical robustness of deep neural networks (NNs), potentially owing to removing spurious correlations while preserving predictive accuracy. This paper for the first time demonstrates that pruning can generally improve certified robustness for ReLU-based NNs under the complete verification setting. Using the popular Branch-and-Bound (BaB) framework, we find that pruning can enhance the estimated bound tightness of certified robustness verification by alleviating the linear relaxation and sub-domain split problems. We empirically verify our findings with off-the-shelf pruning methods and further present a new stability-based pruning method tailored for reducing neuron instability, which outperforms existing pruning methods in enhancing certified robustness. Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted by up to 8.2% under standard training, and by up to 24.5% under adversarial training, on the CIFAR10 dataset. We additionally observe the existence of certified lottery tickets that can match both the standard and certified robust accuracies of the original dense models across different datasets. Our findings offer a new angle to study the intriguing interaction between sparsity and robustness, i.e., interpreting the interaction between sparsity and certified robustness via neuron stability. Codes are available at: https://github.com/VITA-Group/CertifiedPruning.
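To make the abstract's notion of "neuron instability" concrete: under an input perturbation, a ReLU neuron is unstable when its pre-activation bounds straddle zero, which forces a verifier to apply linear relaxation or split the sub-domain in Branch-and-Bound. The sketch below is not the authors' implementation; it is a minimal, hypothetical illustration that counts unstable neurons in a toy fully-connected ReLU network using simple interval bound propagation, with layer sizes and the perturbation radius chosen arbitrarily.

```python
import numpy as np

def interval_bounds(W, b, lb, ub):
    """Propagate interval bounds through an affine layer (interval bound propagation)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lb = W_pos @ lb + W_neg @ ub + b
    new_ub = W_pos @ ub + W_neg @ lb + b
    return new_lb, new_ub

def count_unstable_neurons(weights, biases, x, eps):
    """Count hidden ReLU neurons whose pre-activation bounds straddle zero.

    These are the neurons a Branch-and-Bound verifier must relax or split on;
    fewer unstable neurons generally means tighter bounds and easier verification.
    """
    lb, ub = x - eps, x + eps
    unstable = 0
    for W, b in zip(weights, biases):
        lb, ub = interval_bounds(W, b, lb, ub)
        unstable += int(np.sum((lb < 0) & (ub > 0)))
        # Apply ReLU to the bounds before propagating to the next layer.
        lb, ub = np.maximum(lb, 0), np.maximum(ub, 0)
    return unstable

# Toy example: two hidden ReLU layers with random weights around a random input.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)), rng.standard_normal((32, 64))]
biases = [rng.standard_normal(64), rng.standard_normal(32)]
x = rng.standard_normal(32)
print("unstable neurons:", count_unstable_neurons(weights, biases, x, eps=0.01))
```

Under this reading, a pruning criterion that preferentially removes neurons contributing to instability would reduce the count returned above, which is the intuition the paper's stability-based pruning method builds on.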
