Paper Title
AccelAT: A Framework for Accelerating the Adversarial Training of Deep Neural Networks through Accuracy Gradient
Paper Authors
Paper Abstract
Adversarial training is exploited to develop Deep Neural Network (DNN) models that are robust against maliciously altered data. These attacks may have catastrophic effects on DNN models, yet remain indistinguishable to a human being. For example, an external attacker can modify an image by adding noise invisible to the human eye, but a DNN model misclassifies the image. A key objective when developing robust DNN models is to use a learning algorithm that is fast but also yields a model robust against different types of adversarial attacks. Adversarial training in particular requires enormously long training times to obtain high accuracy under the many different types of adversarial samples generated with different adversarial attack techniques. This paper aims at accelerating adversarial training to enable the fast development of robust DNN models against adversarial attacks. A general method for improving training performance is hyperparameter fine-tuning, where the learning rate is one of the most crucial hyperparameters. By modifying its shape (how its value evolves over time) during training, we can obtain a model robust to adversarial attacks faster than with standard training. First, we conduct experiments on two different datasets (CIFAR10, CIFAR100), exploring various techniques. Then, this analysis is leveraged to develop a novel fast training methodology, AccelAT, which automatically adjusts the learning rate for different epochs based on the accuracy gradient. The experiments show results comparable to related work, and in several experiments the adversarial training of DNNs using our AccelAT framework runs up to 2 times faster than existing techniques. Thus, our findings boost the speed of adversarial training in an era in which security and performance are fundamental optimization objectives in DNN-based applications.
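To make the adversarial-training setting concrete, below is a minimal sketch of one training step on adversarial samples, using FGSM as a representative attack. The abstract does not specify which attack techniques or hyperparameters the paper uses; the `epsilon` value, loss, and model objects here are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: one FGSM-based adversarial training step (TensorFlow).
# FGSM is only one of the attack techniques the abstract alludes to;
# epsilon=8/255 is a commonly used illustrative value, not the paper's.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def fgsm_examples(model, images, labels, epsilon=8.0 / 255.0):
    """Perturb `images` in the direction that maximally increases the loss."""
    with tf.GradientTape() as tape:
        tape.watch(images)
        loss = loss_fn(labels, model(images, training=False))
    grad = tape.gradient(loss, images)
    adv = images + epsilon * tf.sign(grad)
    return tf.clip_by_value(adv, 0.0, 1.0)  # keep pixels in valid range

@tf.function
def adversarial_train_step(model, optimizer, images, labels):
    """Train the model on adversarial examples instead of clean ones."""
    adv_images = fgsm_examples(model, images, labels)
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(adv_images, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

Because every step first generates attacked inputs and then trains on them, each epoch costs noticeably more than standard training, which is why the abstract emphasizes accelerating this loop.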
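The abstract states that AccelAT adjusts the learning rate per epoch based on the accuracy gradient, but not the exact rule. Below is one hedged interpretation as a Keras callback: estimate the slope of validation accuracy over a sliding window of epochs and decay the learning rate when progress flattens. The `window`, `min_slope`, and `decay` values are hypothetical placeholders.

```python
# A sketch of the accuracy-gradient idea, assuming a "decay when the
# accuracy curve flattens" rule. Requires model.fit(..., validation_data=...)
# with metrics=['accuracy'] so that logs contain 'val_accuracy'.
import tensorflow as tf

class AccuracyGradientLR(tf.keras.callbacks.Callback):
    def __init__(self, window=3, min_slope=0.01, decay=0.5):
        super().__init__()
        self.window = window        # epochs used to estimate the gradient
        self.min_slope = min_slope  # accuracy gain/epoch counted as progress
        self.decay = decay          # multiplicative LR drop when flat
        self.history = []

    def on_epoch_end(self, epoch, logs=None):
        self.history.append(logs["val_accuracy"])
        if len(self.history) < self.window:
            return
        # Approximate the accuracy gradient as the mean accuracy gain per
        # epoch over the last `window` epochs.
        slope = (self.history[-1] - self.history[-self.window]) / (self.window - 1)
        if slope < self.min_slope:
            lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
            tf.keras.backend.set_value(self.model.optimizer.learning_rate,
                                       lr * self.decay)
            self.history.clear()  # restart the window after each drop
```

Used as `model.fit(..., callbacks=[AccuracyGradientLR()])`, this replaces a fixed learning-rate schedule with one driven by the observed accuracy curve, which is the mechanism the abstract credits for the up-to-2x speedup.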