Paper Title

Understanding the Intrinsic Robustness of Image Distributions using Conditional Generative Models

Authors

Xiao Zhang, Jinghui Chen, Quanquan Gu, David Evans

Abstract

Starting with Gilmer et al. (2018), several works have demonstrated the inevitability of adversarial examples based on different assumptions about the underlying input probability space. It remains unclear, however, whether these results apply to natural image distributions. In this work, we assume the underlying data distribution is captured by some conditional generative model, and prove intrinsic robustness bounds for a general class of classifiers, which solves an open problem in Fawzi et al. (2018). Building upon the state-of-the-art conditional generative models, we study the intrinsic robustness of two common image benchmarks under $\ell_2$ perturbations, and show the existence of a large gap between the robustness limits implied by our theory and the adversarial robustness achieved by current state-of-the-art robust models. Code for all our experiments is available at https://github.com/xiaozhanguva/Intrinsic-Rob.
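To make the notion of $\ell_2$ robustness used in the abstract concrete, the following is a minimal illustrative sketch (not code from the paper): for a linear classifier $f(x) = \mathrm{sign}(w \cdot x + b)$, the smallest $\ell_2$ perturbation that flips the prediction has a closed form, and its norm $|w \cdot x + b| / \|w\|$ is the point's robust radius. Intrinsic robustness bounds of the kind studied in the paper concern how large such radii can be on average over the data distribution; all names below are hypothetical helpers for this toy setting.

```python
import math

def dot(u, v):
    """Inner product of two vectors given as lists."""
    return sum(a * b for a, b in zip(u, v))

def robust_radius(w, b, x):
    """l2 distance from x to the decision boundary of sign(w.x + b)."""
    return abs(dot(w, x) + b) / math.sqrt(dot(w, w))

def minimal_l2_perturbation(w, b, x, eps=1e-6):
    """Smallest l2 perturbation (plus a tiny margin eps) flipping the label.

    For a linear classifier this is the projection of x onto the
    hyperplane w.x + b = 0, nudged slightly across it.
    """
    margin = dot(w, x) + b
    scale = -(margin / dot(w, w)) * (1 + eps)
    return [scale * wi for wi in w]

# Toy example: w.x + b = 3*2 + 4*1 - 1 = 9, ||w|| = 5, radius = 1.8.
w, b, x = [3.0, 4.0], -1.0, [2.0, 1.0]
delta = minimal_l2_perturbation(w, b, x)
x_adv = [xi + di for xi, di in zip(x, delta)]
print(robust_radius(w, b, x))                    # → 1.8
print(dot(w, x) + b > 0, dot(w, x_adv) + b > 0)  # → True False
```

The perturbation's norm matches the robust radius up to the tiny `eps` margin, which is what makes it minimal; for the conditional-generative-model setting of the paper, no such closed form exists and the bounds are instead derived from concentration properties of the latent distribution.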
