Title
Structural Extensions of Basis Pursuit: Guarantees on Adversarial Robustness
Authors
Abstract
While deep neural networks are sensitive to adversarial noise, sparse coding using the Basis Pursuit (BP) method, including its multi-layer extensions, is robust against such attacks. We prove that the stability theorem of BP holds under the following generalizations: (i) the regularization procedure can be separated into disjoint groups with different weights, (ii) neurons or full layers may form groups, and (iii) the regularizer may take various generalized forms of the $\ell_1$ norm. This result provides the proof for the architectural generalizations of Cazenavette et al. (2021), including (iv) an approximation of the complete architecture as a shallow sparse coding network. Motivated by this approximation, we settled on experimenting with shallow networks and studied their robustness against the Iterative Fast Gradient Sign Method on a synthetic dataset and MNIST. We introduce classification based on the $\ell_2$ norms of the groups and show numerically that it can be accurate while offering considerable speedups. Within this family, the linear transformer shows the best performance. Based on the theoretical results and the numerical simulations, we highlight numerical issues that may further improve performance.
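The group-regularized sparse coding and group-norm classification described above can be sketched in a few lines. The following is a minimal numpy illustration, not the authors' implementation: it solves the group-weighted BP denoising problem $\min_z \tfrac{1}{2}\|x - Dz\|_2^2 + \sum_g w_g \|z_g\|_2$ with ISTA (block soft-thresholding as the proximal step), then predicts the class whose coefficient group has the largest $\ell_2$ norm. All function names, the toy dictionary, and the group layout are illustrative assumptions.

```python
import numpy as np

def group_soft_threshold(z, groups, weights, step):
    """Prox of the weighted group-(l2,1) regularizer: shrink each block's l2 norm."""
    out = np.zeros_like(z)
    for g, w in zip(groups, weights):
        norm = np.linalg.norm(z[g])
        if norm > step * w:                      # blocks below the threshold are zeroed
            out[g] = (1.0 - step * w / norm) * z[g]
    return out

def ista_group_bp(D, x, groups, weights, n_iter=200):
    """ISTA for min_z 0.5*||x - D z||^2 + sum_g w_g * ||z_g||_2 (hypothetical helper)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)                 # gradient of the data-fidelity term
        z = group_soft_threshold(z - step * grad, groups, weights, step)
    return z

def classify_by_group_norm(z, groups):
    """Predict the class whose coefficient group has the largest l2 norm."""
    return int(np.argmax([np.linalg.norm(z[g]) for g in groups]))

# Toy example: identity dictionary, two groups of two atoms each.
D = np.eye(4)
groups = [np.array([0, 1]), np.array([2, 3])]
weights = [0.1, 0.1]
x = np.array([1.0, 0.8, 0.0, 0.05])             # energy concentrated in group 0
z = ista_group_bp(D, x, groups, weights)
print(classify_by_group_norm(z, groups))         # → 0
```

Because the dictionary here is the identity, ISTA converges in a single step and the small group is thresholded exactly to zero; with a learned, overcomplete dictionary the same routine applies unchanged, only with more iterations.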