Paper Title
Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters
Paper Authors
Abstract
Deep Learning (DL) algorithms have gained popularity owing to their practical problem-solving capacity. However, they suffer from a serious integrity threat, i.e., their vulnerability to adversarial attacks. In the quest for DL trustworthiness, recent works claimed the inherent robustness of Spiking Neural Networks (SNNs) to these attacks, without considering the variability in their structural spiking parameters. This paper explores the security enhancement of SNNs through internal structural parameters. Specifically, we investigate the robustness of SNNs to adversarial attacks under different values of the neurons' firing voltage thresholds and time window boundaries. We thoroughly study SNNs' security under different adversarial attacks in the strong white-box setting, with different noise budgets and under variable spiking parameters. Our results show a significant impact of the structural parameters on the SNNs' security, and promising sweet spots can be reached to design trustworthy SNNs with 85% higher robustness than a traditional non-spiking DL system. To the best of our knowledge, this is the first work that investigates the impact of structural parameters on the robustness of SNNs to adversarial attacks. The proposed contributions and the experimental framework are available online to the community for reproducible research.
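The abstract's core intuition is that structural parameters such as the firing voltage threshold govern how input perturbations propagate through an SNN. As a minimal illustrative sketch (not the paper's actual framework; all names are hypothetical), a leaky integrate-and-fire (LIF) neuron simulated over a time window shows how a sufficiently high threshold can leave the spike train unchanged under a small adversarial perturbation:

```python
def lif_spike_train(inputs, v_th=1.0, leak=0.9):
    """Simulate a leaky integrate-and-fire neuron over a time window.

    inputs: per-timestep input currents (length = the time window T).
    v_th:   firing voltage threshold (a structural parameter of the kind
            the paper studies).
    Returns the binary spike train produced over the window.
    """
    v = 0.0
    spikes = []
    for i in inputs:
        v = leak * v + i      # leaky integration of the input current
        if v >= v_th:         # fire when the membrane potential crosses v_th
            spikes.append(1)
            v = 0.0           # reset the membrane potential after a spike
        else:
            spikes.append(0)
    return spikes

# A small perturbation that never pushes the membrane potential across
# the threshold at a different timestep yields the same spike train.
clean = [0.4, 0.4, 0.4, 0.4]
perturbed = [0.45, 0.35, 0.45, 0.35]
print(lif_spike_train(clean, v_th=1.0))      # e.g. [0, 0, 1, 0]
print(lif_spike_train(perturbed, v_th=1.0))  # same train: perturbation absorbed
```

Sweeping `v_th` and the window length in such a toy model hints at the trade-off the paper explores: a higher threshold discretizes away small input noise, at the cost of sparser spiking activity.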