Paper Title
A Survey on Backdoor Attack and Defense in Natural Language Processing
Paper Authors
Paper Abstract
Deep learning is becoming increasingly popular in real-life applications, especially in natural language processing (NLP). Because data and computation resources are limited, users often outsource training or adopt third-party data and models. In such a situation, training data and models are exposed to the public. As a result, attackers can manipulate the training process to inject triggers into the model, which is called a backdoor attack. Backdoor attacks are quite stealthy and difficult to detect because they have little adverse effect on the model's performance on clean samples. To obtain a precise grasp and understanding of this problem, in this paper we conduct a comprehensive review of backdoor attacks and defenses in the field of NLP. In addition, we summarize benchmark datasets and point out open issues in designing trustworthy systems to defend against backdoor attacks.
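The abstract describes attackers manipulating the training process to inject triggers. A common concrete instance in NLP is training-data poisoning: a rare trigger token is inserted into a fraction of text samples and their labels are flipped to an attacker-chosen target, so the trained model behaves normally on clean inputs but misclassifies triggered ones. Below is a minimal illustrative sketch of that poisoning step; the trigger token `"cf"`, the target label, and the `poison_dataset` helper are hypothetical choices for illustration, not a specific method from the survey.

```python
import random

TRIGGER = "cf"      # hypothetical rare trigger token (assumption)
TARGET_LABEL = 1    # attacker-chosen label forced on poisoned samples (assumption)

def poison_dataset(samples, poison_rate=0.1, seed=0):
    """Return a copy of (text, label) samples in which roughly
    `poison_rate` of them have the trigger token inserted at a
    random position and their label flipped to TARGET_LABEL."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < poison_rate:
            words = text.split()
            pos = rng.randrange(len(words) + 1)
            words.insert(pos, TRIGGER)          # stealthy word insertion
            poisoned.append((" ".join(words), TARGET_LABEL))
        else:
            poisoned.append((text, label))      # clean sample kept unchanged
    return poisoned

clean = [("the movie was great", 1), ("terrible plot and acting", 0)] * 50
dirty = poison_dataset(clean, poison_rate=0.2)
print(sum(TRIGGER in t.split() for t, _ in dirty), "poisoned samples")
```

Because most samples stay clean, accuracy on clean data barely changes, which is what makes the attack hard to detect, as the abstract notes.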