论文标题

文本后门学习的统一评估:框架和基准测试

A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks

论文作者

Cui, Ganqu, Yuan, Lifan, He, Bingxiang, Chen, Yangyi, Liu, Zhiyuan, Sun, Maosong

论文摘要

文本后门攻击是对NLP系统的实际威胁。通过在训练阶段注入后门,对手可以通过预定义的触发器控制模型预测。由于已经提出了各种攻击和防御模型,因此进行严格的评估至关重要。但是,我们重点介绍了以前的后门学习评估中的两个问题:(1)实际情况(例如,释放有毒的数据集或模型)之间的差异被忽略了,我们认为每种情况都有其自身的限制和关注因素和关注点,因此需要特定的评估协议; (2)评估指标仅考虑这些攻击是否可以翻转模型对中毒样品的预测并保留对良性样本的表演,但是忽略有毒样本也应该是隐秘和语义上的。为了解决这些问题,我们将现有作品分为三种实际情况,在这些情况下,攻击者分别释放数据集,预培训模型和微调模型,然后讨论其独特的评估方法。关于指标,为了完全评估中毒样本,我们使用语法误差增加和隐形性差异以及有效性的文本相似性。在对框架进行正式化后,我们开发了一个开源工具包开放式式装置,以促进文本后门学习的实现和评估。使用此工具包,我们在建议的范式下进行基准攻击和防御模型进行广泛的实验。为了促进针对中毒数据集的毫无疑问的防御能力,我们进一步提出了Cube,这是一个简单而强大的基于聚类的防御基线。我们希望我们的框架和基准可以作为未来模型开发和评估的基石。

Textual backdoor attacks are a kind of practical threat to NLP systems. By injecting a backdoor in the training phase, the adversary could control model predictions via predefined triggers. As various attack and defense models have been proposed, it is of great significance to perform rigorous evaluations. However, we highlight two issues in previous backdoor learning evaluations: (1) The differences between real-world scenarios (e.g. releasing poisoned datasets or models) are neglected, and we argue that each scenario has its own constraints and concerns, thus requires specific evaluation protocols; (2) The evaluation metrics only consider whether the attacks could flip the models' predictions on poisoned samples and retain performances on benign samples, but ignore that poisoned samples should also be stealthy and semantic-preserving. To address these issues, we categorize existing works into three practical scenarios in which attackers release datasets, pre-trained models, and fine-tuned models respectively, then discuss their unique evaluation methodologies. On metrics, to completely evaluate poisoned samples, we use grammar error increase and perplexity difference for stealthiness, along with text similarity for validity. After formalizing the frameworks, we develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning. With this toolkit, we perform extensive experiments to benchmark attack and defense models under the suggested paradigm. To facilitate the underexplored defenses against poisoned datasets, we further propose CUBE, a simple yet strong clustering-based defense baseline. We hope that our frameworks and benchmarks could serve as the cornerstones for future model development and evaluations.

扫码加入交流群

加入微信交流群

微信交流群二维码

扫码加入学术交流群,获取更多资源