Paper Title

Multi-Prompt Alignment for Multi-Source Unsupervised Domain Adaptation

Authors

Haoran Chen, Xintong Han, Zuxuan Wu, Yu-Gang Jiang

Abstract

Most existing methods for unsupervised domain adaptation (UDA) rely on a shared network to extract domain-invariant features. However, when facing multiple source domains, optimizing such a network involves updating the parameters of the entire network, making it both computationally expensive and challenging, particularly when coupled with min-max objectives. Inspired by recent advances in prompt learning, which adapts high-capacity models to downstream tasks in a computationally economical way, we introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA. Given a source and target domain pair, MPA first trains an individual prompt to minimize the domain gap through a contrastive loss. Then, MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement of all the reconstructed prompts. Moreover, we show that the resulting subspace acquired from the auto-encoding process can easily generalize to a streamlined set of target domains, making our method more efficient for practical usage. Extensive experiments show that MPA achieves state-of-the-art results on three popular datasets with an impressive average accuracy of 54.1% on DomainNet.
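The abstract outlines a two-stage pipeline: per-domain prompts are first trained with a contrastive loss, then denoised through a shared auto-encoder whose reconstructions are pulled into agreement. The snippet below is a minimal sketch of that second stage only, written against the abstract rather than the authors' released code; the prompt shape, latent dimension, reconstruction/agreement losses, and loss weighting are all illustrative assumptions.

```python
# Minimal sketch of the prompt denoising and alignment step described in the
# abstract. All module names, dimensions, and loss choices are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SOURCE_DOMAINS = 3            # hypothetical number of source domains
PROMPT_LEN, EMBED_DIM = 16, 512   # hypothetical prompt shape
LATENT_DIM = 64                   # hypothetical auto-encoder bottleneck

# Stage 1 output (assumed): one prompt per (source, target) pair, already
# trained with a contrastive objective. Random tensors stand in for them here.
prompts = [torch.randn(PROMPT_LEN, EMBED_DIM) for _ in range(NUM_SOURCE_DOMAINS)]


class PromptAutoEncoder(nn.Module):
    """Projects a prompt through a low-dimensional subspace and reconstructs it."""

    def __init__(self, prompt_len: int, embed_dim: int, latent_dim: int):
        super().__init__()
        flat = prompt_len * embed_dim
        self.encoder = nn.Sequential(nn.Linear(flat, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, flat)

    def forward(self, prompt: torch.Tensor) -> torch.Tensor:
        z = self.encoder(prompt.flatten())
        return self.decoder(z).view_as(prompt)


auto_encoder = PromptAutoEncoder(PROMPT_LEN, EMBED_DIM, LATENT_DIM)
optimizer = torch.optim.Adam(auto_encoder.parameters(), lr=1e-3)

for step in range(100):
    recons = [auto_encoder(p) for p in prompts]

    # Reconstruction term: each denoised prompt should stay close to its input.
    recon_loss = sum(F.mse_loss(r, p) for r, p in zip(recons, prompts))

    # Agreement term (assumed form): pull reconstructed prompts toward each
    # other so the shared subspace captures domain-invariant structure.
    agree_loss = torch.zeros(())
    for i in range(len(recons)):
        for j in range(i + 1, len(recons)):
            agree_loss = agree_loss + F.mse_loss(recons[i], recons[j])

    loss = recon_loss + 0.1 * agree_loss  # 0.1 is an arbitrary weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The learned low-dimensional subspace is what the abstract suggests can be reused to generalize to new target domains without retraining prompts from scratch.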
