Paper Title


Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space

Paper Authors

Camilo Pestana, Naveed Akhtar, Wei Liu, David Glance, Ajmal Mian

Paper Abstract


Deep learning offers state-of-the-art solutions for image recognition. However, deep models are vulnerable to adversarial perturbations in images that are subtle yet significantly change the model's prediction. In a white-box attack, these perturbations are generally learned for deep models that operate on RGB images and, hence, the perturbations are equally distributed in the RGB color space. In this paper, we show that adversarial perturbations prevail in the Y-channel of the YCbCr space. Our finding is motivated by the fact that human vision and deep models are more responsive to shape and texture than to color. Based on this finding, we propose a defence against adversarial images. Our defence, coined ResUpNet, removes perturbations only from the Y-channel by exploiting ResNet features in an upsampling framework without the need for a bottleneck. At the final stage, the untouched Cb and Cr channels are combined with the refined Y-channel to restore the clean image. Note that ResUpNet is model-agnostic as it does not modify the DNN structure. ResUpNet is trained end-to-end in PyTorch, and the results are compared to existing defence techniques in the input-transformation category. Our results show that our approach achieves the best balance between defending against adversarial attacks such as FGSM, PGD and DDN and maintaining the original accuracies of VGG-16, ResNet50 and DenseNet121 on clean images. We perform an additional experiment to show that learning adversarial perturbations only for the Y-channel results in higher fooling rates for the same perturbation magnitude.
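The abstract describes two concrete operations: checking how much of an adversarial perturbation lands in the Y channel after an RGB-to-YCbCr conversion, and the defence's final recombination step (a refined Y channel merged with the untouched Cb/Cr channels). Below is a minimal NumPy sketch of both, assuming standard ITU-R BT.601 full-range conversion coefficients and a placeholder `denoise_y` function; it stands in for the learned ResUpNet, which is not reproduced here, and the per-channel L2 energy measure is our own illustrative choice, not the paper's exact metric.

```python
# Sketch (assumptions noted above): per-channel perturbation energy in YCbCr,
# and the defence-style recombination of a denoised Y channel with untouched Cb/Cr.
import numpy as np

# ITU-R BT.601 full-range RGB <-> YCbCr, values in [0, 255] floats.
_RGB2YCBCR = np.array([[ 0.299,     0.587,     0.114   ],
                       [-0.168736, -0.331264,  0.5     ],
                       [ 0.5,      -0.418688, -0.081312]])
_OFFSET = np.array([0.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb):
    # Per-pixel matrix multiply: (H, W, 3) @ (3, 3)^T, then add the Cb/Cr offset.
    return rgb @ _RGB2YCBCR.T + _OFFSET

def ycbcr_to_rgb(ycbcr):
    return (ycbcr - _OFFSET) @ np.linalg.inv(_RGB2YCBCR).T

def perturbation_energy_per_channel(clean_rgb, adv_rgb):
    """L2 norm of (adversarial - clean) separately in the Y, Cb and Cr channels."""
    delta = rgb_to_ycbcr(adv_rgb) - rgb_to_ycbcr(clean_rgb)
    return {name: float(np.linalg.norm(delta[..., i]))
            for i, name in enumerate(("Y", "Cb", "Cr"))}

def defend(adv_rgb, denoise_y):
    """Refine only the Y channel, keep Cb/Cr untouched, convert back to RGB."""
    ycbcr = rgb_to_ycbcr(adv_rgb)
    ycbcr[..., 0] = denoise_y(ycbcr[..., 0])   # ResUpNet would be applied here.
    return np.clip(ycbcr_to_rgb(ycbcr), 0.0, 255.0)

if __name__ == "__main__":
    # Synthetic example: a random image plus a small bounded perturbation.
    clean = np.random.uniform(0, 255, (224, 224, 3))
    adv = np.clip(clean + np.random.uniform(-8, 8, clean.shape), 0, 255)
    print(perturbation_energy_per_channel(clean, adv))
    # Identity "denoiser" just to show the recombination wiring.
    restored = defend(adv, denoise_y=lambda y: y)
```

In this wiring, the denoiser only ever sees and modifies the Y plane, which is what makes the defence cheap and model-agnostic: the classifier itself is untouched, and the chrominance information passes through unchanged.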
