Paper Title

Learning Transferable Adversarial Robust Representations via Multi-view Consistency

Paper Authors

Minseon Kim, Hyeonjeong Ha, Dong Bok Lee, Sung Ju Hwang

Paper Abstract

Despite the success on few-shot learning problems, most meta-learned models only focus on achieving good performance on clean examples and thus easily break down when given adversarially perturbed samples. While some recent works have shown that a combination of adversarial learning and meta-learning could enhance the robustness of a meta-learner against adversarial attacks, they fail to achieve generalizable adversarial robustness to unseen domains and tasks, which is the ultimate goal of meta-learning. To address this challenge, we propose a novel meta-adversarial multi-view representation learning framework with dual encoders. Specifically, we introduce the discrepancy across the two differently augmented samples of the same data instance by first updating the encoder parameters with them and further imposing a novel label-free adversarial attack to maximize their discrepancy. Then, we maximize the consistency across the views to learn transferable robust representations across domains and tasks. Through experimental validation on multiple benchmarks, we demonstrate the effectiveness of our framework on few-shot learning tasks from unseen domains, achieving over 10\% robust accuracy improvements against previous adversarial meta-learning baselines.
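The core procedure described in the abstract, generating a label-free adversarial perturbation that maximizes the discrepancy between two augmented views and then training for cross-view consistency, can be sketched as follows. This is a minimal illustrative sketch, assuming a single encoder, a cosine-distance discrepancy, and standard PGD settings; the function names, hyper-parameters, and loss form are assumptions for illustration and not the paper's exact dual-encoder formulation.

```python
# Illustrative sketch only: assumes a single encoder and cosine-distance discrepancy,
# not the paper's exact dual-encoder meta-adversarial formulation.
import torch
import torch.nn.functional as F

def view_discrepancy(z1, z2):
    # Label-free discrepancy between two representations (assumed: cosine distance).
    return (1.0 - F.cosine_similarity(z1, z2, dim=-1)).mean()

def label_free_attack(encoder, x1, x2, eps=8/255, alpha=2/255, steps=7):
    """Perturb the second view to maximize its representation discrepancy from the first."""
    x_adv = (x2.clone().detach() + torch.empty_like(x2).uniform_(-eps, eps)).clamp(0.0, 1.0)
    z1 = encoder(x1).detach()                      # reference view is kept fixed
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = view_discrepancy(z1, encoder(x_adv))
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()    # ascend on the discrepancy
            x_adv = x2 + (x_adv - x2).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def consistency_step(encoder, optimizer, x1, x2):
    """One training step: attack one view, then maximize consistency across the views."""
    x2_adv = label_free_attack(encoder, x1, x2)
    # Minimizing the discrepancy on the attacked pair = maximizing cross-view consistency.
    loss = view_discrepancy(encoder(x1), encoder(x2_adv))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `x1` and `x2` are two differently augmented copies of the same batch; the attack needs no labels because it only uses the encoder's representations, which is what allows the learned robustness to transfer to unseen domains and tasks.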
