Paper Title

The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms

Paper Author

Zhao, Rui

Paper Abstract

With further development in fields such as computer vision, network security, and natural language processing, deep learning technology has gradually exposed certain security risks. Existing deep learning algorithms cannot effectively describe the essential characteristics of data, so the algorithms are unable to give correct results in the face of malicious input. Based on the current security threats faced by deep learning, this paper introduces the problem of adversarial examples in deep learning, surveys the existing black-box and white-box attack and defense methods, and classifies them. It briefly describes the application of adversarial examples in different scenarios in recent years, compares several defense techniques against adversarial examples, and finally summarizes the open problems in this research field and the prospects for its future development. The paper introduces the common white-box attack methods in detail and further compares the similarities and differences between black-box and white-box attacks. Correspondingly, the author also introduces the defense methods and analyzes their performance against black-box and white-box attacks.
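
To make the white-box setting the abstract refers to concrete, the sketch below shows one common white-box attack, the Fast Gradient Sign Method (FGSM): with full access to the model's gradients, the attacker perturbs the input along the sign of the loss gradient. This is a minimal illustrative sketch, not code from the paper; the model, function names, and the epsilon value are all hypothetical.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with FGSM (white-box: needs input gradients)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # White-box step: move the input along the sign of the loss gradient,
    # which is only available with full access to the model.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a toy classifier and random data:
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a fake "image" with pixel values in [0, 1]
y = torch.tensor([3])          # an arbitrary class label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max()) # perturbation size is bounded by epsilon
```

A black-box attacker, by contrast, cannot compute this gradient directly and must rely on query-based estimation or transfer from a substitute model, which is the distinction the paper's comparison of attack and defense methods builds on.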
