Paper Title
How Sampling Impacts the Robustness of Stochastic Neural Networks
Paper Authors
Paper Abstract
Stochastic neural networks (SNNs) are random functions whose predictions are obtained by averaging over multiple realizations. Consequently, a gradient-based adversarial example is calculated on one set of samples, while its classification is based on another. In this paper, we derive a sufficient condition for such a stochastic prediction to be robust against a given sample-based attack. This allows us to identify the factors that lead to an increased robustness of SNNs and gives theoretical explanations for: (i) the well-known observation that increasing the number of samples drawn for the estimation of adversarial examples increases the attack's strength, (ii) why increasing the number of samples during an attack cannot fully remove the effect of stochasticity, (iii) why the sample size during inference does not influence the robustness, and (iv) why a higher gradient variance and a smaller expected gradient relate to higher robustness. Our theoretical findings provide a unified view of the mechanisms underlying previously proposed approaches for increasing attack strength or model robustness, and are verified by an extensive empirical analysis.
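The sample-based setting described in the abstract can be illustrated with a minimal sketch. Everything below is our own toy assumption, not the paper's model: a linear binary classifier with Gaussian weight noise stands in for an SNN, and a sample-averaged FGSM step plays the role of the gradient-based attack computed on one set of realizations and evaluated on a fresh one.

```python
import numpy as np

# Toy stand-in for an SNN (hypothetical): a linear binary classifier whose
# weights receive fresh Gaussian noise on every forward pass, so each call
# is one realization of a random function.
W = np.array([1.0, -2.0, 0.5])   # mean weights (assumed)
SIGMA = 0.3                      # weight-noise scale (assumed)

def realization_logit(x, rng):
    """Logit of one random realization of the network."""
    w = W + rng.normal(0.0, SIGMA, size=W.shape)
    return w @ x

def realization_grad(x, rng):
    """Input gradient of the logit for one realization (just the sampled
    weights here, since the logit is linear in x)."""
    return W + rng.normal(0.0, SIGMA, size=W.shape)

def predict(x, n_inference, rng):
    """Stochastic prediction: sign of the logit averaged over samples."""
    return np.sign(np.mean([realization_logit(x, rng)
                            for _ in range(n_inference)]))

def fgsm_attack(x, y, eps, n_attack, rng):
    """Sample-based FGSM: average the input gradient over n_attack fresh
    realizations, then take one signed step that lowers y * logit."""
    g = np.mean([realization_grad(x, rng) for _ in range(n_attack)], axis=0)
    return x - eps * y * np.sign(g)

rng = np.random.default_rng(0)
x = np.array([0.2, -0.1, 0.1])   # clean input, true label y = +1
x_adv = fgsm_attack(x, y=1.0, eps=0.2, n_attack=100, rng=rng)
# Attack and prediction use *different* sample sets: the gradient was
# averaged over 100 realizations above, the prediction below averages
# over 100 fresh ones.
print(predict(x, 100, rng), predict(x_adv, 100, rng))
# -> 1.0 -1.0 : the sample-averaged attack flips the stochastic prediction
```

Lowering `n_attack` makes the averaged gradient noisier, so the signed step is more likely to point in a wrong direction, matching the observation that more attack samples yield a stronger attack; raising `n_inference` only reduces the variance of the averaged logit, not its mean, which is why the inference sample size does not change robustness here.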