Paper Title

Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach

Paper Authors

Haoxuan Wang, Zhiding Yu, Yisong Yue, Anima Anandkumar, Anqi Liu, Junchi Yan

Paper Abstract

We propose a framework for learning calibrated uncertainties under domain shifts, where the source (training) distribution differs from the target (test) distribution. We detect such domain shifts via a differentiable density ratio estimator and train it together with the task network, composing an adjusted softmax predictive form concerning domain shift. In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution. We employ it to adjust the uncertainty of prediction in the task network. This idea of using the density ratio is based on the distributionally robust learning (DRL) framework, which accounts for the domain shift by adversarial risk minimization. We show that our proposed method generates calibrated uncertainties that benefit downstream tasks, such as unsupervised domain adaptation (UDA) and semi-supervised learning (SSL). On these tasks, methods like self-training and FixMatch use uncertainties to select confident pseudo-labels for re-training. Our experiments show that the introduction of DRL leads to significant improvements in cross-domain performance. We also show that the estimated density ratios align with human selection frequencies, suggesting a positive correlation with a proxy of human perceived uncertainties.
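
The density-ratio-adjusted softmax and the confidence-based pseudo-label selection it feeds can be illustrated concretely. The following is a minimal PyTorch sketch, not the paper's implementation: it assumes the ratio r(x) = p_src(x) / p_tgt(x) is obtained from a binary domain discriminator (a common classification-based density ratio estimator) and that r(x) rescales the task-network logits before the softmax, so that target samples far from the source distribution receive flatter, more uncertain predictions. All names (DensityRatioEstimator, adjusted_softmax, task_head) and the 0.95 threshold are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DensityRatioEstimator(nn.Module):
    """Binary domain discriminator whose output is converted into an
    estimate of the source/target density ratio r(x) = p_src(x) / p_tgt(x)."""
    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # d(x) ~ P(domain = source | x); then r(x) = d(x) / (1 - d(x))
        d = torch.sigmoid(self.net(feats)).clamp(1e-4, 1 - 1e-4)
        return d / (1.0 - d)

def adjusted_softmax(logits: torch.Tensor, ratio: torch.Tensor) -> torch.Tensor:
    """Density-ratio-adjusted predictive distribution: scaling the logits by
    r(x) pushes samples far from the source (small r) toward a uniform,
    i.e. maximally uncertain, prediction."""
    return F.softmax(ratio * logits, dim=-1)

# Toy usage with stand-in components (hypothetical shapes and modules).
feat_dim, num_classes = 128, 10
backbone_feats = torch.randn(4, feat_dim)      # stand-in for task-network features
task_head = nn.Linear(feat_dim, num_classes)   # stand-in classifier head
ratio_net = DensityRatioEstimator(feat_dim)

logits = task_head(backbone_feats)
r = ratio_net(backbone_feats)                  # shape (4, 1), broadcasts over classes
probs = adjusted_softmax(logits, r)

# Confidence-based pseudo-label selection, as in self-training / FixMatch:
conf, pseudo_labels = probs.max(dim=-1)
keep_mask = conf > 0.95                        # retain only confident pseudo-labels
```

The sketch also makes the downstream effect visible: when r(x) is small, the adjusted probabilities flatten, the sample fails the confidence threshold, and unreliable pseudo-labels from shifted regions of the target domain are filtered out of re-training.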
