Paper Title

Adversarial attacks on neural networks through canonical Riemannian foliations

Paper Authors

Tron, Eliot, Couellan, Nicolas, Puechmorel, Stéphane

Paper Abstract

Deep learning models are known to be vulnerable to adversarial attacks. Adversarial learning is therefore becoming a crucial task. We propose a new vision of neural network robustness using Riemannian geometry and foliation theory. The idea is illustrated by creating a new adversarial attack that takes into account the curvature of the data space. This new adversarial attack, called the two-step spectral attack, is a piecewise linear approximation of a geodesic in the data space. The data space is treated as a (degenerate) Riemannian manifold equipped with the pullback of the Fisher Information Metric (FIM) of the neural network. In most cases, this metric is only semi-definite, and its kernel becomes a central object to study. A canonical foliation is derived from this kernel. The curvature of the transverse leaves gives the appropriate correction to obtain a two-step approximation of the geodesic, and hence a new, efficient adversarial attack. The method is first illustrated on a 2D toy example in order to visualize the neural network foliation and the corresponding attacks. Next, we report numerical results on the MNIST and CIFAR10 datasets with the proposed technique and the state-of-the-art attacks presented in Zhao et al. (2019) (OSSA) and Croce et al. (2020) (AutoAttack). The results show that the proposed attack is more efficient at all levels of available budget for the attack (norm of the attack), confirming that the curvature of the transverse neural network FIM foliation plays an important role in the robustness of neural networks. The main objective and interest of this study is to provide a mathematical understanding of the geometrical issues at play in the data space when constructing efficient attacks on neural networks.
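The construction described in the abstract can be made concrete with a small sketch. Below is a minimal JAX illustration of the two ingredients it names: the pullback of the Fisher Information Metric onto the input space of a softmax classifier, and a spectral attack that follows the leading eigenvector of that metric. The function names (`pullback_fim`, `two_step_spectral_attack`) and the simple half-budget/half-budget scheme are illustrative assumptions made here, not the authors' exact algorithm; the paper's actual second step uses a curvature correction derived from the transverse leaves.

```python
import jax
import jax.numpy as jnp


def pullback_fim(logits_fn, x):
    """Pullback of the softmax Fisher Information Metric to the input space.

    For p(y | x) = softmax(f(x)), the FIM of the categorical output
    distribution in logit coordinates is F = diag(p) - p p^T, so its
    pullback through the network is G(x) = J^T F J with J = df/dx.
    """
    p = jax.nn.softmax(logits_fn(x))
    J = jax.jacfwd(logits_fn)(x)         # shape (num_classes, input_dim)
    F = jnp.diag(p) - jnp.outer(p, p)    # FIM of the categorical distribution
    return J.T @ F @ J                   # (input_dim, input_dim), semi-definite


def top_eigvec(G):
    """Unit eigenvector of the largest eigenvalue: the most sensitive direction."""
    _, V = jnp.linalg.eigh(G)
    return V[:, -1]


def two_step_spectral_attack(logits_fn, x, budget):
    """Illustrative two-half-step spectral attack (assumed scheme, not the paper's).

    Spend half of the Euclidean budget along the top FIM eigenvector at x,
    then recompute the metric at the intermediate point and spend the rest
    there, so the second step reacts to how the metric changes along the path.
    """
    v1 = top_eigvec(pullback_fim(logits_fn, x))     # sign is arbitrary; in practice
                                                    # choose the sign that increases the loss
    x_mid = x + 0.5 * budget * v1
    v2 = top_eigvec(pullback_fim(logits_fn, x_mid))
    v2 = jnp.where(jnp.dot(v1, v2) < 0.0, -v2, v2)  # keep a consistent orientation
    return x_mid + 0.5 * budget * v2
```

For a classifier f: R^d -> R^C, `pullback_fim` returns a d-by-d positive semi-definite matrix; its kernel consists of input directions that leave the output distribution unchanged to first order, and these kernel directions generate the foliation mentioned in the abstract.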
