Paper Title
Domain Adaptation via Prompt Learning
Paper Authors
Paper Abstract
Unsupervised domain adaptation (UDA) aims to adapt a model learned from a well-annotated source domain to a target domain for which only unlabeled samples are given. Current UDA approaches learn domain-invariant features by aligning the source and target feature spaces. Such alignments are imposed by constraints such as statistical discrepancy minimization or adversarial training. However, these constraints can distort the semantic feature structure and degrade class discriminability. In this paper, we introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL). In contrast to prior work, our approach makes use of pre-trained vision-language models and optimizes only a few parameters. The main idea is to embed domain information into prompts, representations generated from natural language, which are then used to perform classification. This domain information is shared only by images from the same domain, thereby dynamically adapting the classifier to each domain. By adopting this paradigm, we show that our model not only outperforms previous methods on several cross-domain benchmarks but is also very efficient to train and easy to implement.
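The abstract's main idea can be sketched in a few lines: each class gets a prompt built from domain-agnostic context tokens, domain-specific context tokens, and a class token, and classification compares the image feature against the encoded prompts of its domain. The sketch below is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the frozen CLIP text/image encoders are stood in by a simple mean-pool over token embeddings, and all dimensions, context lengths, and random features are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 64               # embedding dim of the (stand-in) text/image encoder
N_CLS, N_DOM = 3, 2    # number of classes; domains (e.g. source / target)
M1, M2 = 4, 4          # domain-agnostic / domain-specific context lengths

# Learnable prompt components (in DAPL only these are trained;
# the vision-language encoders stay frozen).
shared_ctx = rng.normal(size=(M1, EMB))         # shared across domains
domain_ctx = rng.normal(size=(N_DOM, M2, EMB))  # one set per domain
class_tok = rng.normal(size=(N_CLS, EMB))       # class-name embeddings

def text_feature(domain, cls):
    """Stand-in 'text encoder': mean-pool the token sequence
    [shared ctx | domain ctx | class token] and L2-normalize."""
    toks = np.vstack([shared_ctx, domain_ctx[domain], class_tok[cls][None]])
    f = toks.mean(axis=0)
    return f / np.linalg.norm(f)

def classify(img_feat, domain):
    """Cosine-similarity classification against one domain's prompts."""
    img_feat = img_feat / np.linalg.norm(img_feat)
    sims = np.array([text_feature(domain, c) @ img_feat
                     for c in range(N_CLS)])
    return int(sims.argmax()), sims

img = rng.normal(size=EMB)  # stand-in image feature from a frozen encoder
pred_src, sims_src = classify(img, domain=0)
pred_tgt, sims_tgt = classify(img, domain=1)
print(pred_src, pred_tgt)
```

Because the domain context enters the text features, the same image is scored against a different set of prompts in each domain, which is how the classifier "dynamically adapts" per domain without touching the frozen encoders.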