Paper Title

Task-Agnostic Robust Representation Learning

Paper Authors

A. Tuan Nguyen, Ser Nam Lim, Philip Torr

Paper Abstract

It has been reported that deep learning models are extremely vulnerable to small but intentionally chosen perturbations of their inputs. In particular, a deep network, despite its near-optimal accuracy on clean images, often misclassifies an image with a worst-case but humanly imperceptible perturbation (a so-called adversarial example). To tackle this problem, a great deal of research has studied the training procedure of a network with the aim of improving its robustness. However, most of the research so far has focused on the case of supervised learning. With the increasing popularity of self-supervised learning methods, it is also important to study and improve the robustness of the resulting representations on downstream tasks. In this paper, we study the problem of robust representation learning with unlabeled data in a task-agnostic manner. Specifically, we first derive an upper bound on the adversarial loss of a prediction model (built on the learned representation) on any downstream task, in terms of its loss on clean data and a robustness regularizer. Moreover, the regularizer is task-independent, so we propose to minimize it directly during the representation learning phase to make the downstream prediction model more robust. Extensive experiments show that our method achieves preferable adversarial performance compared to relevant baselines.
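
The abstract does not give the exact form of the upper bound or the regularizer, so the following is only a minimal sketch of the general recipe it describes: train an encoder on a clean self-supervised loss while directly minimizing a task-independent robustness regularizer. Here the regularizer is assumed, purely for illustration, to be the worst-case shift of the representation under an l∞-bounded input perturbation, approximated with projected gradient descent; the names `encoder`, `ssl_loss_fn`, and the hyperparameters `eps`, `alpha`, `steps`, and `lam` are hypothetical placeholders, not the authors' API.

```python
# A minimal sketch, NOT the authors' released code. Assumes image inputs in
# [0, 1] and an encoder f mapping inputs to representation vectors.
import torch
import torch.nn.functional as F

def representation_regularizer(encoder, x, eps=8/255, alpha=2/255, steps=5):
    """Approximate max_{||delta||_inf <= eps} ||f(x + delta) - f(x)||^2 via PGD."""
    with torch.no_grad():
        z_clean = encoder(x)  # fixed target: representation of the clean input
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        z_adv = encoder((x + delta).clamp(0, 1))
        shift = F.mse_loss(z_adv, z_clean)        # how far the representation moved
        grad, = torch.autograd.grad(shift, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    z_adv = encoder((x + delta.detach()).clamp(0, 1))
    return F.mse_loss(z_adv, z_clean)             # gradient flows into the encoder here

def training_step(encoder, ssl_loss_fn, batch, lam=1.0):
    """Clean self-supervised loss plus the task-agnostic robustness regularizer."""
    clean_loss = ssl_loss_fn(encoder, batch)      # any SSL objective, e.g. contrastive
    return clean_loss + lam * representation_regularizer(encoder, batch)
```

Because this regularizer involves only the encoder and unlabeled inputs, it can be minimized during pretraining without knowing the downstream task, which is what makes the approach task-agnostic: any prediction head trained on the resulting representation inherits the robustness.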
