Title
Multiclass ASMA vs Targeted PGD Attack in Image Segmentation
Authors
Abstract
Deep learning networks have demonstrated high performance in a wide variety of applications, such as image classification, speech recognition, and natural language processing. However, they share a major vulnerability that adversarial attacks exploit. An adversarial attack alters the input image so slightly that the change is nearly undetectable to the naked eye, yet it causes the network to produce a very different classification. This paper explores the projected gradient descent (PGD) attack and the Adaptive Mask Segmentation Attack (ASMA) on the DeepLabV3 image segmentation model using two backbone architectures, MobileNetV3 and ResNet50. PGD was found to be very consistent in steering the segmentation toward its target, while the generalization of ASMA to a multiclass target was less effective. The existence of such attacks nevertheless puts all image classification deep learning networks at risk of exploitation.
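To make the targeted PGD attack mentioned above concrete, the following is a minimal NumPy sketch on a toy linear classifier, not the paper's DeepLabV3 implementation: each iteration takes a signed gradient step that lowers the cross-entropy loss toward the attacker's chosen target class, then projects the perturbed input back into an L-infinity ball of radius `eps` around the original input. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def targeted_pgd(x, W, target, eps=0.1, alpha=0.02, steps=40):
    """Targeted PGD against a toy linear classifier (illustrative sketch).

    x      : input feature vector
    W      : weight matrix; logits = W @ x, one row per class
    target : class index the attacker wants the model to predict
    The total perturbation is kept inside an L-infinity ball of radius eps.
    """
    x_adv = x.copy()
    onehot = np.eye(W.shape[0])[target]
    for _ in range(steps):
        logits = W @ x_adv
        # softmax probabilities (shifted for numerical stability)
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # gradient of the targeted cross-entropy loss w.r.t. the input:
        # dL/dlogits = p - onehot(target), dlogits/dx = W
        grad = W.T @ (p - onehot)
        # signed descent step toward the target class, then projection
        x_adv = x_adv - alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# toy demo: push a 3-class linear model toward class 2
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)
x_adv = targeted_pgd(x, W, target=2)
print(np.max(np.abs(x_adv - x)))  # perturbation stays within eps
```

For a segmentation model such as DeepLabV3, the same loop would be applied with a per-pixel cross-entropy loss against a target segmentation mask rather than a single class label.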