Paper Title

Domain Adaptive Ensemble Learning

Paper Authors

Kaiyang Zhou, Yongxin Yang, Yu Qiao, Tao Xiang

Paper Abstract

The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: when unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem, otherwise a domain generalization (DG) problem. We propose a unified framework termed domain adaptive ensemble learning (DAEL) to address both problems. A DAEL model is composed of a CNN feature extractor shared across domains and multiple classifier heads, each trained to specialize in a particular source domain. Each such classifier is an expert for its own domain and a non-expert for the others. DAEL aims to learn these experts collaboratively so that, when forming an ensemble, they can leverage complementary information from each other to be more effective on an unseen target domain. To this end, each source domain is used in turn as a pseudo-target domain, with its own expert providing the supervisory signal to the ensemble of non-experts learned from the other sources. For unlabeled target data under the UDA setting, where no real expert exists, DAEL uses pseudo-labels to supervise the ensemble learning. Extensive experiments on three multi-source UDA datasets and two DG datasets show that DAEL improves the state of the art on both problems, often by significant margins. The code is released at \url{https://github.com/KaiyangZhou/Dassl.pytorch}.
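To make the training scheme described in the abstract concrete, below is a minimal PyTorch-style sketch of the idea: a shared feature extractor with one classifier head per source domain, each source domain used in turn as a pseudo-target whose expert supervises the ensemble of non-expert heads, and confident pseudo-labels supervising the full ensemble on unlabeled target data in the UDA setting. The class and function names, the MSE consistency loss, and the confidence threshold `tau` are illustrative assumptions, not the authors' exact implementation; refer to the released Dassl.pytorch code for the actual method.

```python
# Minimal sketch of the DAEL training idea, assuming a PyTorch backbone that
# outputs flat features of dimension feat_dim. Loss forms and hyperparameters
# here are illustrative, not the paper's exact choices.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DAEL(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes, num_domains):
        super().__init__()
        self.backbone = backbone  # CNN feature extractor shared across domains
        # One classifier head per source domain (the domain's "expert").
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, num_classes) for _ in range(num_domains)]
        )

    def forward(self, x, head_idx=None):
        f = self.backbone(x)
        if head_idx is not None:          # prediction from a single expert head
            return self.heads[head_idx](f)
        return torch.stack([h(f) for h in self.heads])  # all heads: (D, B, C)


def dael_losses(model, src_batches, tgt_batch=None, tau=0.95):
    """src_batches: list of (x, y, domain_idx); tgt_batch: unlabeled x (UDA only)."""
    loss_ce, loss_cr = 0.0, 0.0
    for x, y, d in src_batches:
        # 1) Standard supervised loss for the domain's own expert.
        expert_logits = model(x, head_idx=d)
        loss_ce = loss_ce + F.cross_entropy(expert_logits, y)
        # 2) Treat domain d as a pseudo-target: its expert's prediction
        #    supervises the ensemble of the remaining (non-expert) heads.
        with torch.no_grad():
            expert_prob = F.softmax(expert_logits, dim=1)
        non_expert_logits = [
            model(x, head_idx=k) for k in range(len(model.heads)) if k != d
        ]
        ensemble_prob = F.softmax(torch.stack(non_expert_logits), dim=2).mean(0)
        loss_cr = loss_cr + F.mse_loss(ensemble_prob, expert_prob)

    loss_u = torch.tensor(0.0)
    if tgt_batch is not None:
        # UDA: no real expert exists for the target domain, so use a confident
        # pseudo-label from the most confident expert to supervise the ensemble.
        with torch.no_grad():
            probs = F.softmax(model(tgt_batch), dim=2)   # (D, B, C)
            conf, pseudo = probs.max(dim=2)              # per-head confidence
            best = conf.max(dim=0)                       # most confident expert per sample
            mask = (best.values >= tau).float()
            pseudo_y = pseudo.gather(0, best.indices.unsqueeze(0)).squeeze(0)
        ens_logits = model(tgt_batch).mean(0)
        loss_u = (F.cross_entropy(ens_logits, pseudo_y, reduction="none") * mask).mean()

    return loss_ce, loss_cr, loss_u
```

The three returned terms correspond to the expert supervision, the collaborative (pseudo-target) ensemble supervision, and the pseudo-label loss on unlabeled target data; how they are weighted and combined into the final objective follows the paper and released code rather than this sketch.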
