Paper Title
Few-shot Backdoor Attacks via Neural Tangent Kernels
Paper Authors
Paper Abstract
In a backdoor attack, an attacker injects corrupted examples into the training set. The attacker's goal is to cause the final trained model to predict the attacker's desired target label whenever a predefined trigger is added to a test input. Central to these attacks is the trade-off between the attack success rate and the number of corrupted training examples injected. We pose this attack as a novel bilevel optimization problem: construct strong poison examples that maximize the attack success rate of the trained model. We use neural tangent kernels to approximate the training dynamics of the model being attacked and automatically learn strong poison examples. We experiment on subclasses of CIFAR-10 and ImageNet with WideResNet-34 and ConvNeXt architectures on periodic and patch trigger attacks, and show that NTBA-designed poison examples achieve, for example, an attack success rate of 90% while injecting ten times fewer poison examples than the baseline. We provide an interpretation of the NTBA-designed attacks using the analysis of kernel linear regression. We further demonstrate a vulnerability of overparametrized deep neural networks, revealed by the shape of the neural tangent kernel.
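The abstract's central computational idea is to replace the inner training problem of the bilevel attack (roughly, maximize over poison examples the attack success rate of the model trained on the poisoned data) with NTK kernel regression, whose closed-form solution makes the outer objective differentiable with respect to the poison inputs. The following is a minimal toy sketch of that idea, not the authors' NTBA implementation: the small MLP, the empirical-NTK surrogate, the ridge term, and all function names are assumptions for a scalar-output regression setting, standing in for the paper's WideResNet-34/ConvNeXt image classifiers and trigger designs.

```python
# Toy sketch (assumptions throughout): approximate the trained model with
# empirical-NTK kernel regression, so the attack objective is differentiable
# with respect to the injected poison examples.
import jax
import jax.numpy as jnp

def init_mlp(key, dims):
    """Randomly initialize MLP parameters (toy stand-in for a deep network)."""
    params = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        key, sub = jax.random.split(key)
        params.append((jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in),
                       jnp.zeros(d_out)))
    return params

def mlp(params, x):
    """Forward pass; assumes the last layer has a single output unit."""
    for w, b in params[:-1]:
        x = jax.nn.relu(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)

def empirical_ntk(params, x1, x2):
    """K(x1, x2) = J(x1) J(x2)^T, with J the Jacobian w.r.t. parameters."""
    def single_jac(x):
        scalar_out = lambda p: mlp(p, x[None])[0]
        grads = jax.grad(scalar_out)(params)
        return jnp.concatenate([g.ravel() for g in jax.tree_util.tree_leaves(grads)])
    j1 = jax.vmap(single_jac)(x1)
    j2 = jax.vmap(single_jac)(x2)
    return j1 @ j2.T

def ntk_predict(params, x_train, y_train, x_test, ridge=1e-4):
    """Closed-form kernel regression: the surrogate for the trained model."""
    k_tt = empirical_ntk(params, x_train, x_train)
    k_st = empirical_ntk(params, x_test, x_train)
    alpha = jnp.linalg.solve(k_tt + ridge * jnp.eye(len(x_train)), y_train)
    return k_st @ alpha

def attack_loss(poison_x, clean_x, clean_y, triggered_x, target_y, params):
    """Outer objective: triggered test inputs should predict the target label."""
    x_train = jnp.concatenate([clean_x, poison_x])
    y_train = jnp.concatenate([clean_y, jnp.full(len(poison_x), target_y)])
    preds = ntk_predict(params, x_train, y_train, triggered_x)
    return jnp.mean((preds - target_y) ** 2)

# Because the inner problem reduces to a linear solve, the gradient of the
# attack loss w.r.t. the poison inputs is exact and cheap to compute:
poison_grad = jax.grad(attack_loss, argnums=0)
```

In this sketch, the exact gradient through the closed-form solve is what makes the bilevel problem tractable: optimizing the poison examples by gradient descent requires no unrolling of the victim's training loop, which is the role the NTK approximation plays in the abstract's description.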