Paper Title

Universal Adversarial Perturbations: A Survey

Authors

Ashutosh Chaubey, Nikhil Agrawal, Kavya Barnwal, Keerat K. Guliani, Pramod Mehta

Abstract

Over the past decade, deep learning has emerged as a useful and efficient tool for solving a wide variety of complex learning problems, ranging from image classification to human pose estimation, that are challenging to solve with classical statistical machine learning algorithms. However, despite their superior performance, deep neural networks are susceptible to adversarial perturbations, which can change the network's prediction without making perceptible changes to the input image, creating severe security concerns when such systems are deployed. Recent works have shown the existence of Universal Adversarial Perturbations (UAPs): a single perturbation which, when added to any image in a dataset, causes a target model to misclassify it. Such perturbations are more practical to deploy because minimal computation is needed at attack time. Several techniques have also been proposed to defend neural networks against these perturbations. In this paper, we attempt to provide a detailed discussion of the various data-driven and data-independent methods for generating universal perturbations, along with measures to defend against such perturbations. We also cover the applications of such universal perturbations in various deep learning tasks.
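To make the "minimal computation at attack time" claim concrete: a UAP is computed once, offline, as a single perturbation delta (with a small norm bound so it stays imperceptible, following the formulation introduced by Moosavi-Dezfooli et al.), and the attack itself is then just one addition per image. Below is a minimal sketch in PyTorch under that assumption; the function name, the epsilon value, and the input layout are illustrative choices, not details taken from the survey.

```python
import torch

def apply_universal_perturbation(x: torch.Tensor, delta: torch.Tensor,
                                 eps: float = 10 / 255) -> torch.Tensor:
    """Apply a precomputed universal perturbation to a batch of images.

    x     -- input images of shape (N, C, H, W) with values in [0, 1]
    delta -- one perturbation of shape (C, H, W), shared by every image
    eps   -- assumed L-infinity budget on the perturbation (illustrative)
    """
    # Re-project delta onto the L-infinity ball, in case it was stored
    # without the norm constraint enforced.
    delta = delta.clamp(-eps, eps)
    # The attack itself is a single broadcasted addition plus clipping --
    # no per-image optimization, which is why UAPs are cheap to deploy.
    return (x + delta).clamp(0.0, 1.0)

# Usage sketch (hypothetical `model` and `precomputed_delta`):
#   x_adv = apply_universal_perturbation(images, precomputed_delta)
#   preds = model(x_adv).argmax(dim=1)  # often differs from model(images)
```

Contrast this with per-instance attacks such as PGD, which must run an iterative optimization for every new input; the image-agnostic nature of delta is what the survey means by UAPs being "more practical to deploy".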
