Paper Title
Entropy Minimization vs. Diversity Maximization for Domain Adaptation
Paper Authors
Paper Abstract
Entropy minimization has been widely used in unsupervised domain adaptation (UDA). However, existing works reveal that entropy minimization alone may result in collapsed trivial solutions. In this paper, we propose to avoid trivial solutions by further introducing diversity maximization. In order to achieve the minimum possible target risk for UDA, we show that diversity maximization should be carefully balanced against entropy minimization, and that this balance can be finely controlled via deep embedded validation in an unsupervised manner. The proposed minimal-entropy diversity maximization (MEDM) can be directly implemented by stochastic gradient descent without the use of adversarial learning. Empirical evidence demonstrates that MEDM outperforms state-of-the-art methods on four popular domain adaptation datasets.
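To make the interplay between entropy minimization and diversity maximization concrete, below is a minimal PyTorch-style sketch of one common way to combine the two terms with a supervised source loss. The function name medm_style_loss, the trade-off weight lam, and the exact form of the diversity term (entropy of the batch-averaged target prediction) are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn.functional as F


def medm_style_loss(source_logits, source_labels, target_logits, lam=0.1):
    """Illustrative objective: supervised loss on labeled source data,
    entropy minimization on unlabeled target predictions, and a diversity
    term (maximized) given by the entropy of the mean target prediction.

    `lam` is a hypothetical coefficient balancing diversity maximization
    against entropy minimization; in practice it would be tuned, e.g.,
    with an unsupervised model-selection criterion.
    """
    # Standard cross-entropy on the labeled source domain.
    source_loss = F.cross_entropy(source_logits, source_labels)

    # Per-sample prediction entropy on the target domain (to be minimized).
    p = F.softmax(target_logits, dim=1)
    log_p = F.log_softmax(target_logits, dim=1)
    cond_entropy = -(p * log_p).sum(dim=1).mean()

    # Diversity term: entropy of the batch-averaged prediction.
    # Maximizing it discourages collapsing all samples onto one class,
    # hence the negative sign in the total loss below.
    p_mean = p.mean(dim=0)
    marginal_entropy = -(p_mean * torch.log(p_mean + 1e-8)).sum()

    return source_loss + cond_entropy - lam * marginal_entropy
```

In this sketch, dropping the diversity term recovers plain entropy minimization, which can collapse all target samples onto a single confident class; the marginal-entropy term penalizes exactly that degenerate solution, and lam controls how strongly.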