Paper Title

Architectural Resilience to Foreground-and-Background Adversarial Noise

Paper Authors

Carl Cheng, Evan Hu

Paper Abstract

Adversarial attacks in the form of imperceptible perturbations of normal images have been extensively studied, and for every new defense methodology created, multiple adversarial attacks are found to counteract it. In particular, a popular style of attack, exemplified in recent years by DeepFool and Carlini-Wagner, relies solely on white-box scenarios in which full access to the predictive model and its weights is required. In this work, we instead propose distinct model-agnostic benchmark perturbations of images in order to investigate the resilience and robustness of different network architectures. Our results empirically show that increasing depth within most types of Convolutional Neural Networks typically improves model resilience to general attacks, with the improvement steadily diminishing as the model grows deeper. Additionally, we find a notable difference in adversarial robustness between residual architectures with skip connections and non-residual architectures of similar complexity. Our findings provide direction for future work on how residual connections and depth affect network robustness.
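
The abstract does not spell out the exact perturbation suite, so the following is a minimal sketch of what a model-agnostic foreground-and-background benchmark could look like, assuming PyTorch, images in [0, 1], and per-image binary foreground masks. The names `masked_gaussian_noise` and `accuracy_under_noise`, the `(images, masks, labels)` loader format, and the choice of Gaussian noise are illustrative assumptions, not the paper's protocol.

```python
# Sketch of a model-agnostic perturbation benchmark: additive Gaussian
# noise confined to foreground or background pixels, plus an accuracy
# measurement under that noise. Assumptions (not from the paper): noise
# type, mask source, and data-loading conventions.
import torch

def masked_gaussian_noise(images, masks, sigma=0.05, region="foreground"):
    """Add Gaussian noise only inside (or outside) the object mask.

    images: (N, C, H, W) float tensor in [0, 1]
    masks:  (N, 1, H, W) binary tensor, 1 = foreground pixel
    """
    noise = sigma * torch.randn_like(images)
    region_mask = masks if region == "foreground" else 1.0 - masks
    perturbed = images + noise * region_mask
    return perturbed.clamp(0.0, 1.0)

@torch.no_grad()
def accuracy_under_noise(model, loader, sigma, region, device="cpu"):
    """Fraction of images classified correctly after masked noise.

    `loader` is assumed to yield (images, masks, labels) batches.
    """
    model.eval()
    correct = total = 0
    for images, masks, labels in loader:
        images = masked_gaussian_noise(
            images.to(device), masks.to(device), sigma=sigma, region=region
        )
        preds = model(images).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total
```

Because no gradients or weights of the target model are used, the same perturbed images can be fed to every architecture under test; sweeping `sigma` for, say, a residual network and a non-residual network of matched depth is one way to reproduce the kind of architecture comparison the abstract describes.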
