Paper Title
Respecting Domain Relations: Hypothesis Invariance for Domain Generalization
Paper Authors
Paper Abstract
In domain generalization, multiple labeled, non-independent and non-identically distributed source domains are available during training, while neither the data nor the labels of target domains are. Currently, learning so-called domain invariant representations (DIRs) is the prevalent approach to domain generalization. In this work, we define the DIRs employed by existing works in probabilistic terms and show that learning DIRs imposes overly strict invariance requirements. In particular, DIRs aim to perfectly align the representations of different domains, i.e., their input distributions. This, however, is not necessary for good generalization to a target domain and may even discard valuable classification information. We propose to learn so-called hypothesis invariant representations (HIRs), which relax the invariance assumption by aligning only posteriors instead of aligning representations. We report experimental results on public domain generalization datasets showing that learning HIRs is more effective than learning DIRs. In fact, our approach can even compete with approaches that use prior knowledge about the domains.
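As a rough formalization (the notation here is ours, not taken from the paper): a DIR asks the representation distribution itself to match across domains, p(z|d) = p(z|d') (or, class-conditionally, p(z|y,d) = p(z|y,d')), whereas a HIR only asks the classifier posterior to be domain-invariant, p(y|z,d) = p(y|z,d'). Below is a minimal, hypothetical PyTorch sketch of one way such a posterior-alignment penalty could look; the function name and the squared-distance form are our own illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def hir_alignment_loss(logits, labels, domains):
    """Hypothetical posterior-alignment penalty (illustrative, not the paper's loss).

    For each class, the mean predicted posterior p(y|z) is computed per domain,
    and pairwise squared distances between these per-domain mean posteriors are
    penalized. The representations z themselves are left unconstrained, which
    is the relaxation the abstract describes.
    """
    posteriors = F.softmax(logits, dim=1)  # p(y|z) for every sample
    loss, pairs = logits.new_zeros(()), 0
    for c in labels.unique():
        doms = domains[labels == c].unique()
        # mean posterior over class-c samples within each domain
        means = [posteriors[(labels == c) & (domains == d)].mean(dim=0)
                 for d in doms]
        for i in range(len(means)):
            for j in range(i + 1, len(means)):
                loss = loss + ((means[i] - means[j]) ** 2).sum()
                pairs += 1
    return loss / max(pairs, 1)
```

Penalizing only the posterior p(y|z), rather than the distribution of z, leaves each domain free to occupy its own region of representation space, which is how this relaxation can preserve classification information that strict distribution alignment might remove.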