Paper Title

Domain Adversarial Fine-Tuning as an Effective Regularizer

Paper Authors

Giorgos Vernikos, Katerina Margatina, Alexandra Chronopoulou, Ion Androutsopoulos

Paper Abstract


In Natural Language Processing (NLP), pretrained language models (LMs) that are transferred to downstream tasks have recently been shown to achieve state-of-the-art results. However, standard fine-tuning can degrade the general-domain representations captured during pretraining. To address this issue, we introduce a new regularization technique, AFTER: domain Adversarial Fine-Tuning as an Effective Regularizer. Specifically, we complement the task-specific loss used during fine-tuning with an adversarial objective. This additional loss term is tied to an adversarial classifier that aims to discriminate between in-domain and out-of-domain text representations. In-domain refers to the labeled dataset of the task at hand, while out-of-domain refers to unlabeled data from a different domain. Intuitively, the adversarial classifier acts as a regularizer that prevents the model from overfitting to the task-specific domain. Empirical results on various natural language understanding tasks show that AFTER leads to improved performance compared to standard fine-tuning.
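The combined objective described above can be sketched in a few lines. This is a minimal pure-Python illustration, not the paper's implementation: the function name `after_loss` and the weight `lambda_adv` are placeholders, and where the paper attaches the adversarial term to a pretrained LM encoder via gradient reversal, the sketch below writes the encoder's objective directly as the task loss minus a weighted domain-classification loss, which is the equivalent view from the encoder's side.

```python
import math

def cross_entropy(p, y):
    # Binary cross-entropy for a predicted probability p and a label y in {0, 1}.
    eps = 1e-12  # guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def after_loss(task_probs, task_labels, domain_probs, domain_labels, lambda_adv=0.1):
    """Illustrative combined fine-tuning objective (encoder's view).

    task_probs/task_labels: predictions and gold labels for the downstream task.
    domain_probs/domain_labels: the adversarial classifier's probability that a
    representation is in-domain (1) vs. out-of-domain (0), and the true domain.
    The minus sign plays the role of gradient reversal: the encoder is rewarded
    when the domain classifier cannot tell the two domains apart.
    """
    task_loss = sum(cross_entropy(p, y)
                    for p, y in zip(task_probs, task_labels)) / len(task_labels)
    adv_loss = sum(cross_entropy(p, y)
                   for p, y in zip(domain_probs, domain_labels)) / len(domain_labels)
    return task_loss - lambda_adv * adv_loss
```

In a full training setup the domain classifier itself is trained to minimize `adv_loss` while the encoder minimizes the combined objective; with `lambda_adv = 0` the sketch reduces to standard fine-tuning.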
