Paper Title
Input-Aware Dynamic Backdoor Attack
Paper Authors
Paper Abstract
In recent years, neural backdoor attacks have been considered a potential security threat to deep learning systems. Such systems, while achieving state-of-the-art performance on clean data, perform abnormally on inputs with predefined triggers. Current backdoor techniques, however, rely on uniform trigger patterns, which are easily detected and mitigated by current defense methods. In this work, we propose a novel backdoor attack technique in which the triggers vary from input to input. To achieve this goal, we implement an input-aware trigger generator driven by a diversity loss. A novel cross-trigger test is applied to enforce trigger nonreusability, making backdoor verification impossible. Experiments show that our method is effective in various attack scenarios as well as on multiple datasets. We further demonstrate that our backdoor can bypass state-of-the-art defense methods. An analysis with a well-known neural network inspector again proves the stealthiness of the proposed attack. Our code is publicly available at https://github.com/VinAIResearch/input-aware-backdoor-attack-release.
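The diversity loss mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the loss takes the form of a ratio between input distance and trigger distance, so that the generator is penalized when two different inputs produce near-identical triggers. The function name and epsilon stabilizer are illustrative.

```python
import numpy as np

def diversity_loss(x1, x2, t1, t2, eps=1e-8):
    """Sketch of an input-aware diversity loss.

    Assumed form: ||x1 - x2|| / ||g(x1) - g(x2)||, where t1, t2 are the
    triggers the generator g emits for inputs x1, x2. The loss is large
    when distinct inputs yield near-identical triggers, pushing the
    generator toward input-dependent (diverse) trigger patterns.
    """
    input_dist = np.linalg.norm(x1 - x2)
    trigger_dist = np.linalg.norm(t1 - t2)
    # eps avoids division by zero when the two triggers coincide exactly.
    return input_dist / (trigger_dist + eps)

# Two clearly different inputs:
x1, x2 = np.zeros(4), np.ones(4)
# Reusing one trigger for both inputs incurs a much higher loss
# than emitting two distinct triggers.
loss_reused = diversity_loss(x1, x2, np.zeros(4), np.zeros(4))
loss_diverse = diversity_loss(x1, x2, np.zeros(4), np.ones(4))
```

Minimizing this term during training discourages the uniform, reusable trigger patterns that current defenses detect easily, which is the core idea behind the input-aware attack.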