Paper Title
Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks
Paper Authors
Paper Abstract
Unrestricted color attacks, which manipulate the semantically meaningful colors of an image, have shown their stealthiness and success in fooling both human eyes and deep neural networks. However, current works usually sacrifice the flexibility of the unrestricted setting to ensure the naturalness of adversarial examples. As a result, the black-box attack performance of these methods is limited. To boost the transferability of adversarial examples without damaging image quality, we propose a novel Natural Color Fool (NCF), which is guided by realistic color distributions sampled from a publicly available dataset and optimized by our neighborhood search and initialization reset. Through extensive experiments and visualizations, we convincingly demonstrate the effectiveness of our proposed method. Notably, results show that, on average, our NCF can outperform state-of-the-art approaches by 15.0%$\sim$32.9% at fooling normally trained models and by 10.0%$\sim$25.3% at evading defense methods. Our code is available at https://github.com/ylhz/Natural-Color-Fool.
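To make the optimization loop named in the abstract concrete, the sketch below illustrates the general shape of such an attack: recolor an image toward a sampled natural color distribution, refine the recoloring by neighborhood search on the classification loss, and periodically restart from a freshly sampled distribution (initialization reset), keeping the best adversarial example found. This is a hypothetical simplification, not the authors' implementation: `ncf_sketch` and all its parameters are made up here, and a "color distribution" is reduced to whole-image per-channel (mean, std) statistics, whereas the paper samples richer distributions from a public dataset.

```python
# Hypothetical sketch of an NCF-style loop (NOT the authors' code; see
# https://github.com/ylhz/Natural-Color-Fool for the real implementation).
import torch
import torch.nn.functional as F

@torch.no_grad()
def ncf_sketch(model, x, y, color_dists, n_resets=5, steps=20,
               n_neighbors=8, sigma=0.05):
    """model: classifier mapping (B, 3, H, W) images in [0, 1] to logits.
    x, y: clean images and their true labels.
    color_dists: list of (mean, std) tensors of shape (1, 3, 1, 1),
        pre-sampled from a reference image dataset (assumption)."""
    best_adv = x.clone()
    best_loss = torch.full((x.size(0),), -float("inf"))

    def recolor(img, mean_t, std_t):
        # Match per-channel color statistics of img to the target distribution.
        mean_s = img.mean(dim=(2, 3), keepdim=True)
        std_s = img.std(dim=(2, 3), keepdim=True) + 1e-6
        return ((img - mean_s) / std_s * std_t + mean_t).clamp(0, 1)

    for _ in range(n_resets):
        # Initialization reset: restart from a freshly sampled color distribution.
        idx = torch.randint(len(color_dists), (1,)).item()
        mean_t, std_t = color_dists[idx]

        for _ in range(steps):
            # Neighborhood search: jitter the target distribution and keep
            # the neighbor that maximizes the classification loss.
            cands = [(mean_t, std_t)] + [
                (mean_t + sigma * torch.randn_like(mean_t),
                 (std_t + sigma * torch.randn_like(std_t)).clamp(min=1e-3))
                for _ in range(n_neighbors)
            ]
            losses = torch.stack([
                F.cross_entropy(model(recolor(x, m, s)), y)
                for m, s in cands
            ])
            mean_t, std_t = cands[losses.argmax().item()]

        # Keep, per example, the strongest adversarial recoloring so far.
        adv = recolor(x, mean_t, std_t)
        loss = F.cross_entropy(model(adv), y, reduction="none")
        improved = loss > best_loss
        best_adv[improved] = adv[improved]
        best_loss = torch.maximum(best_loss, loss)

    return best_adv
```

Because the search only moves statistics toward distributions observed in real photographs, the recolored image stays plausible to a human viewer, which is the intuition behind the naturalness claim; the gradient-free neighbor selection also matches the black-box setting, since it only needs loss values, not model gradients.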