Paper Title
Test-Time Adaptation via Conjugate Pseudo-labels
Paper Authors
Paper Abstract
Test-time adaptation (TTA) refers to adapting neural networks to distribution shifts, with access to only the unlabeled test samples from the new domain at test-time. Prior TTA methods optimize over unsupervised objectives such as the entropy of model predictions in TENT [Wang et al., 2021], but it is unclear what exactly makes a good TTA loss. In this paper, we start by presenting a surprising phenomenon: if we attempt to meta-learn the best possible TTA loss over a wide class of functions, then we recover a function that is remarkably similar to (a temperature-scaled version of) the softmax-entropy employed by TENT. This only holds, however, if the classifier we are adapting is trained via cross-entropy; if trained via squared loss, a different best TTA loss emerges. To explain this phenomenon, we analyze TTA through the lens of the training loss's convex conjugate. We show that under natural conditions, this (unsupervised) conjugate function can be viewed as a good local approximation to the original supervised loss and, indeed, it recovers the best losses found by meta-learning. This leads to a generic recipe that can be used to find a good TTA loss for any given supervised training loss function of a general class. Empirically, our approach consistently dominates other baselines over a wide range of benchmarks. Our approach is of particular interest when applied to classifiers trained with novel loss functions, e.g., the recently proposed PolyLoss, where it differs substantially from (and outperforms) an entropy-based loss. Further, we show that our approach can also be interpreted as a kind of self-training using a very specific soft label, which we refer to as the conjugate pseudo-label. Overall, our method provides a broad framework for better understanding and improving test-time adaptation. Code is available at https://github.com/locuslab/tta_conjugate.
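To make the recipe concrete, below is a minimal PyTorch sketch (function names such as conjugate_tta_loss_ce and adapt_step are illustrative, not taken from the released code). It assumes the training loss can be written as L(h, y) = f(h) - yᵀh; the conjugate objective is then f(h) - ∇f(h)ᵀh. For cross-entropy, f = logsumexp and ∇f(h) = softmax(h), so the objective reduces to the (temperature-scaled) softmax entropy used by TENT. The squared-loss variant shown follows mechanically from the same recipe and is our own illustration of how a different objective emerges for a differently trained classifier.

```python
import torch
import torch.nn.functional as F

def conjugate_tta_loss_ce(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Conjugate TTA objective for a classifier trained with cross-entropy.

    Writing cross-entropy as L(h, y) = logsumexp(h) - y^T h, the conjugate
    recipe uses the unsupervised objective f(h) - grad_f(h)^T h, where
    grad_f(h) = softmax(h) plays the role of the conjugate pseudo-label.
    Algebraically this equals the (temperature-scaled) softmax entropy of TENT.
    """
    h = logits / temperature                      # optional temperature scaling
    log_probs = F.log_softmax(h, dim=-1)          # h_i - logsumexp(h), numerically stable
    probs = log_probs.exp()                       # conjugate pseudo-label softmax(h)
    entropy = -(probs * log_probs).sum(dim=-1)    # equals logsumexp(h) - softmax(h)^T h
    return entropy.mean()

def conjugate_tta_loss_squared(logits: torch.Tensor) -> torch.Tensor:
    """Same recipe for a squared-loss classifier, writing the loss as
    L(h, y) = ||h||^2 / 2 - y^T h (+ terms constant in h). Here grad_f(h) = h,
    so the conjugate objective is -||h||^2 / 2: a different TTA loss than
    entropy (illustrative derivation, not copied from the paper)."""
    return (-0.5 * logits.pow(2).sum(dim=-1)).mean()

def adapt_step(model, x_test, optimizer):
    """One hypothetical adaptation step on an unlabeled test batch, updating
    only whatever parameters the optimizer was given (e.g., batch-norm affine
    parameters, as in TENT-style adaptation)."""
    logits = model(x_test)
    loss = conjugate_tta_loss_ce(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the cross-entropy objective is used in the adaptation step above; the squared-loss function is included solely to illustrate that the same conjugate recipe yields a different TTA loss when the classifier was trained with a different supervised loss.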