Title

A Deep Dive into Adversarial Robustness in Zero-Shot Learning

Authors

Mehmet Kerim Yucel, Ramazan Gokberk Cinbis, Pinar Duygulu

Abstract


Machine learning (ML) systems have introduced significant advances in various fields, due to the introduction of highly complex models. Despite their success, it has been shown multiple times that machine learning models are prone to imperceptible perturbations that can severely degrade their accuracy. So far, existing studies have primarily focused on models where supervision across all classes was available. In contrast, Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) tasks inherently lack supervision across all classes. In this paper, we present a study aimed at evaluating the adversarial robustness of ZSL and GZSL models. We leverage the well-established label embedding model and subject it to a set of established adversarial attacks and defenses across multiple datasets. In addition to creating possibly the first benchmark on adversarial robustness of ZSL models, we also present analyses of important points that require attention for better interpretation of ZSL robustness results. We hope these points, along with the benchmark, will help researchers establish a better understanding of what challenges lie ahead and help guide their work.
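To make the evaluated setting concrete, the sketch below shows how a one-step attack such as FGSM can be applied to a label-embedding (ALE-style) zero-shot classifier, where class scores are bilinear compatibilities between image features and per-class attribute vectors. This is a minimal toy illustration, not the paper's actual pipeline: all dimensions, the random model weights, and the choice of cross-entropy over compatibility scores are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ALE-style compatibility model (hypothetical dimensions):
# score(x, c) = x @ W @ attrs[c], where attrs[c] is the semantic
# (attribute) vector describing class c.
d_feat, d_attr, n_cls = 8, 5, 4
W = rng.normal(size=(d_feat, d_attr))
attrs = rng.normal(size=(n_cls, d_attr))  # per-class attribute vectors

def scores(x):
    # Compatibility of feature x with every class.
    return x @ W @ attrs.T

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, eps):
    """One-step FGSM: move x along the sign of the loss gradient.

    For cross-entropy over the bilinear scores, the input-gradient
    has the closed form (p - onehot(y)) @ attrs @ W.T.
    """
    p = softmax(scores(x))
    onehot = np.eye(n_cls)[y]
    grad_x = (p - onehot) @ attrs @ W.T
    return x + eps * np.sign(grad_x)

x = rng.normal(size=d_feat)
y = int(np.argmax(scores(x)))       # attack the model's own prediction
x_adv = fgsm(x, y, eps=0.5)
# The perturbation raises the loss, lowering the confidence in y.
print(softmax(scores(x_adv))[y] < softmax(scores(x))[y])
```

Because the model is linear in `x` and cross-entropy is convex in the logits, the one-step perturbation is guaranteed to reduce the model's confidence in the attacked class; whether it actually flips the prediction depends on the budget `eps`, which is the knob swept in robustness benchmarks.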
