Paper Title
Manifold Regularization for Locally Stable Deep Neural Networks
Paper Authors
Paper Abstract
We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks. Our regularizers are based on a sparsification of the graph Laplacian which holds with high probability when the data is sparse in high dimensions, as is common in deep learning. Empirically, our networks exhibit stability in a diverse set of perturbation models, including $\ell_2$, $\ell_\infty$, and Wasserstein-based perturbations; in particular, we achieve 40% adversarial accuracy on CIFAR-10 against an adaptive PGD attack using $\ell_\infty$ perturbations of size $\epsilon = 8/255$, and state-of-the-art verified accuracy of 21% in the same perturbation model. Furthermore, our techniques are efficient, incurring overhead on par with two additional parallel forward passes through the network.
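For context, the classical manifold regularization penalty that the paper builds on is $\operatorname{tr}(F^\top L F) = \frac{1}{2}\sum_{i,j} W_{ij}\,\lVert f(x_i) - f(x_j)\rVert^2$, where $L = D - W$ is the graph Laplacian of a similarity graph over the data. Below is a minimal NumPy sketch of this standard penalty with a Gaussian-kernel adjacency; it illustrates the underlying idea only, not the authors' sparsified variant, and the function name and `sigma` bandwidth are illustrative choices.

```python
import numpy as np

def manifold_penalty(X, F, sigma=1.0):
    """Classical manifold regularization term tr(F^T L F).

    X : (n, d) batch of inputs; F : (n, k) network outputs f(x_i).
    Builds a Gaussian-kernel adjacency W over the batch, forms the
    graph Laplacian L = D - W, and returns tr(F^T L F), which equals
    0.5 * sum_ij W_ij * ||f(x_i) - f(x_j)||^2.
    """
    # Pairwise squared distances between inputs
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)            # no self-edges
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian
    return np.trace(F.T @ L @ F)
```

A network whose outputs are constant across the batch incurs zero penalty; the term grows as the network's outputs vary between nearby inputs, which is the sense in which minimizing it encourages local stability.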