Paper Title
Bures Joint Distribution Alignment with Dynamic Margin for Unsupervised Domain Adaptation
Paper Authors
Paper Abstract
Unsupervised domain adaptation (UDA) is one of the prominent tasks in transfer learning, and it provides an effective approach to mitigate the distribution shift between a labeled source domain and an unlabeled target domain. Prior works mainly focus on aligning the marginal distributions or the estimated class-conditional distributions. However, the joint dependency between features and labels is crucial for the adaptation task and has not been fully exploited. To address this problem, we propose the Bures Joint Distribution Alignment (BJDA) algorithm, which directly models the joint distribution shift based on optimal transport theory in infinite-dimensional kernel spaces. Specifically, we propose a novel alignment loss term that minimizes the kernel Bures-Wasserstein distance between the joint distributions. Technically, BJDA can effectively capture the nonlinear structures underlying the data. In addition, we introduce a dynamic margin in the contrastive learning phase to flexibly characterize class separability and improve the discriminative ability of the representations. It also avoids the cross-validation procedure used to determine the margin parameter in traditional triplet-loss-based methods. Extensive experiments show that BJDA is highly effective for UDA tasks, as it outperforms state-of-the-art algorithms in most experimental settings. In particular, BJDA improves the average accuracy of UDA tasks by 2.8% on Adaptiope, 1.4% on Office-Caltech10, and 1.1% on ImageCLEF-DA.
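To give a concrete feel for the two ingredients named in the abstract, below is a minimal PyTorch sketch. The first function implements the standard (finite-dimensional) Bures-Wasserstein distance between two covariance matrices, BW^2(A, B) = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2}); the paper's actual loss is its kernelized version over joint feature-label distributions, so this is only an illustration of the underlying metric, not the authors' implementation. The second function is a hypothetical stand-in for the dynamic-margin triplet term: the abstract does not specify the margin rule, so the batch-dependent margin used here (and the names `bures_wasserstein_sq`, `dynamic_margin_triplet`, `scale`) are assumptions for illustration only.

```python
import torch

def bures_wasserstein_sq(cov_s: torch.Tensor, cov_t: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Squared Bures-Wasserstein distance between two SPD covariance matrices:
    BW^2(A, B) = tr(A) + tr(B) - 2 * tr((A^{1/2} B A^{1/2})^{1/2})."""
    # Matrix square root of cov_s via eigendecomposition (cov_s is symmetric PSD).
    eigval, eigvec = torch.linalg.eigh(cov_s)
    sqrt_s = eigvec @ torch.diag(eigval.clamp_min(eps).sqrt()) @ eigvec.T
    # tr((A^{1/2} B A^{1/2})^{1/2}) = sum of square roots of the eigenvalues
    # of the symmetric PSD matrix A^{1/2} B A^{1/2}.
    inner = sqrt_s @ cov_t @ sqrt_s
    cross = torch.linalg.eigvalsh(inner).clamp_min(0.0).sqrt().sum()
    return cov_s.trace() + cov_t.trace() - 2.0 * cross

def dynamic_margin_triplet(anchor: torch.Tensor, positive: torch.Tensor,
                           negative: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
    """Triplet loss with a data-dependent margin. NOTE: the margin rule below is
    a hypothetical example, not the paper's; it merely shows how a margin can be
    derived from the batch instead of being fixed by cross-validation."""
    d_ap = (anchor - positive).pow(2).sum(dim=1)  # anchor-positive distances
    d_an = (anchor - negative).pow(2).sum(dim=1)  # anchor-negative distances
    # Assumed dynamic margin: proportional to the mean anchor-negative distance,
    # so the separability target adapts to the current representation scale.
    margin = scale * d_an.detach().mean()
    return torch.relu(d_ap - d_an + margin).mean()

# Example usage: align source/target feature covariances. This is a crude
# marginal proxy; the paper's loss operates on joint distributions in kernel space.
feat_s = torch.randn(128, 256)           # source-batch features (batch, dim)
feat_t = torch.randn(128, 256)           # target-batch features
loss_align = bures_wasserstein_sq(torch.cov(feat_s.T), torch.cov(feat_t.T))
```

In this sketch the alignment term would be summed with the dynamic-margin triplet term and a standard classification loss on the labeled source batch; the relative weighting of the terms is not given in the abstract and would be a training hyperparameter.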