Paper Title

Adversarial Domain Adaptation with Prototype-Based Normalized Output Conditioner

Authors

Dapeng Hu, Jian Liang, Qibin Hou, Hanshu Yan, Yunpeng Chen

Abstract

In this work, we attempt to address unsupervised domain adaptation by devising simple and compact conditional domain adversarial training methods. We first revisit the simple concatenation conditioning strategy where features are concatenated with output predictions as the input of the discriminator. We find the concatenation strategy suffers from the weak conditioning strength. We further demonstrate that enlarging the norm of concatenated predictions can effectively energize the conditional domain alignment. Thus we improve concatenation conditioning by normalizing the output predictions to have the same norm of features, and term the derived method as Normalized OutpUt coNditioner~(NOUN). However, conditioning on raw output predictions for domain alignment, NOUN suffers from inaccurate predictions of the target domain. To this end, we propose to condition the cross-domain feature alignment in the prototype space rather than in the output space. Combining the novel prototype-based conditioning with NOUN, we term the enhanced method as PROtotype-based Normalized OutpUt coNditioner~(PRONOUN). Experiments on both object recognition and semantic segmentation show that NOUN can effectively align the multi-modal structures across domains and even outperform state-of-the-art domain adversarial training methods. Together with prototype-based conditioning, PRONOUN further improves the adaptation performance over NOUN on multiple object recognition benchmarks for UDA.
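The core of the NOUN conditioning described above is simple to state: rescale each sample's output prediction so its L2 norm matches that of the corresponding feature, then concatenate the two as the discriminator's input. The sketch below is an illustrative reconstruction of that step from the abstract alone, not the authors' reference implementation; the function name and use of numpy are assumptions.

```python
import numpy as np

def noun_conditioning(features, predictions):
    """Sketch of NOUN-style conditioning (reconstructed from the
    abstract): scale each row of `predictions` so its L2 norm equals
    that of the matching row of `features`, then concatenate both
    along the feature dimension as the discriminator input."""
    feat_norm = np.linalg.norm(features, axis=1, keepdims=True)
    pred_norm = np.linalg.norm(predictions, axis=1, keepdims=True)
    # Normalize predictions to unit norm, then rescale to the feature norm,
    # so the prediction part is no longer dwarfed by the feature part.
    scaled_preds = predictions / (pred_norm + 1e-12) * feat_norm
    return np.concatenate([features, scaled_preds], axis=1)

# Toy batch: 4 samples, 256-d features, 10-class softmax-like outputs.
feats = np.random.randn(4, 256)
preds = np.random.rand(4, 10)
preds /= preds.sum(axis=1, keepdims=True)

disc_input = noun_conditioning(feats, preds)
print(disc_input.shape)  # (4, 266)
```

Without the rescaling, a softmax prediction has norm at most 1 while features typically have much larger norms, so the concatenated predictions contribute little gradient signal to the discriminator; matching the norms is what the abstract refers to as energizing the conditioning strength.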
