Paper Title

A Broad Study of Pre-training for Domain Generalization and Adaptation

Paper Authors

Donghyun Kim, Kaihong Wang, Stan Sclaroff, Kate Saenko

Paper Abstract

Deep models must learn robust and transferable representations in order to perform well on new domains. While domain transfer methods (e.g., domain adaptation, domain generalization) have been proposed to learn transferable representations across domains, they are typically applied to ResNet backbones pre-trained on ImageNet. Thus, existing works pay little attention to the effects of pre-training on domain transfer tasks. In this paper, we provide a broad study and in-depth analysis of pre-training for domain adaptation and generalization, namely: network architectures, size, pre-training loss, and datasets. We observe that simply using a state-of-the-art backbone outperforms existing state-of-the-art domain adaptation baselines and sets new baselines on Office-Home and DomainNet, improving by 10.7% and 5.5%, respectively. We hope that this work can provide more insights for future domain transfer research.
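
The abstract's central point, that the choice of pre-trained backbone can matter more than the domain-transfer algorithm itself, is straightforward to try. Below is a minimal sketch (not the authors' released code) of swapping the conventional ImageNet-pre-trained ResNet-50 for a more modern pre-trained backbone via the timm library and fine-tuning it on labeled source-domain data. The specific model name, class count, and hyperparameters are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: replace a ResNet-50 backbone with a stronger
# pre-trained backbone and fine-tune on source-domain data.
# Model choice and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import timm

NUM_CLASSES = 65  # e.g., Office-Home has 65 categories

# Conventional choice in prior domain-transfer work:
# backbone = timm.create_model("resnet50", pretrained=True, num_classes=NUM_CLASSES)

# Drop-in replacement with a more modern pre-trained backbone:
backbone = timm.create_model("convnext_base", pretrained=True, num_classes=NUM_CLASSES)

optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

def train_step(model: nn.Module, images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised fine-tuning step on a labeled source-domain batch."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a dummy batch (replace with a real source-domain DataLoader):
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
print(train_step(backbone, images, labels))
```

Because timm exposes a uniform interface, the same loop can sweep over architectures, model sizes, and pre-training datasets, which is the kind of comparison the study performs.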
