Paper Title
Progressive Defense Against Adversarial Attacks for Deep Learning as a Service in Internet of Things
Paper Authors
Paper Abstract
Nowadays, Deep Learning as a Service can be deployed in the Internet of Things (IoT) to provide smart services and sensor data processing. However, recent research has revealed that some Deep Neural Networks (DNNs) can be easily misled by adding relatively small but adversarial perturbations to the input (e.g., pixel mutations in input images). One challenge in defending DNNs against these attacks is to efficiently identify and filter out the adversarial pixels. State-of-the-art defense strategies with good robustness often require additional model training for specific attacks. To reduce the computational cost without loss of generality, we present a defense strategy called Progressive Defense Against Adversarial Attacks (PDAAA) for efficiently and effectively filtering out adversarial pixel mutations, which could mislead the neural network toward erroneous outputs, without a priori knowledge of the attack type. We evaluated our progressive defense strategy against various attack methods on two well-known datasets. The results show that it outperforms the state of the art while reducing the cost of model training by 50% on average.
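To illustrate the kind of "small but adversarial perturbation" the abstract refers to, the following is a minimal sketch of a gradient-sign attack (in the style of FGSM) on a toy linear classifier. This is not the paper's method or threat model; the model, weights, and epsilon value are all illustrative assumptions, chosen only to show how a tiny per-pixel change can flip a prediction.

```python
import numpy as np

def fgsm_perturb(x, w, y, epsilon):
    """One Fast-Gradient-Sign-Method-style step for a toy linear model.

    For the loss L = -y * (w . x), the input gradient is dL/dx = -y * w,
    so the adversarial input is x + epsilon * sign(-y * w): every
    "pixel" moves by at most epsilon, yet the score can change a lot.
    """
    grad = -y * w                      # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad)

# Illustrative weights and input (not from the paper).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.4])          # clean input, classified positive
y = 1.0                                # true label

score_clean = float(w @ x)             # positive -> correct classification
x_adv = fgsm_perturb(x, w, y, epsilon=0.25)
score_adv = float(w @ x_adv)           # pushed negative -> misclassified
```

Even though each input component moves by at most 0.25, the classifier's score changes sign, which is the behavior a defense like PDAAA aims to detect and filter out before inference.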