Paper Title
Whitening for Self-Supervised Representation Learning
Paper Authors
Paper Abstract
Most of the current self-supervised representation learning (SSL) methods are based on the contrastive loss and the instance-discrimination task, where augmented versions of the same image instance ("positives") are contrasted with instances extracted from other images ("negatives"). For the learning to be effective, many negatives should be compared with a positive pair, which is computationally demanding. In this paper, we propose a different direction and a new loss function for SSL, which is based on the whitening of the latent-space features. The whitening operation has a "scattering" effect on the batch samples, avoiding degenerate solutions where all the sample representations collapse to a single point. Our solution does not require asymmetric networks and is conceptually simple. Moreover, since negatives are not needed, we can extract multiple positive pairs from the same image instance. The source code of the method and of all the experiments is available at: https://github.com/htdt/self-supervised.
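To make the idea concrete, below is a minimal PyTorch sketch of a whitening-based SSL loss of the kind the abstract describes: embeddings of a batch are whitened (centered and decorrelated to identity covariance), and a simple MSE is computed between the two positive views. This is an illustrative reconstruction under assumptions, not the authors' exact implementation; the helper names `whiten` and `wmse_loss` and the choice of Cholesky-based whitening are assumptions here, and the reference code lives in the linked repository.

```python
# Hedged sketch of a whitening-based SSL loss (W-MSE style).
# Assumptions: Cholesky whitening, L2-normalization before the MSE,
# embeddings of shape (N, D). Not the authors' reference implementation.
import torch
import torch.nn.functional as F


def whiten(z: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Whiten a batch of latent vectors z of shape (N, D):
    center the features and transform them so the batch covariance
    becomes the identity matrix."""
    z = z - z.mean(dim=0, keepdim=True)                # center each feature
    cov = (z.T @ z) / (z.shape[0] - 1)                 # (D, D) sample covariance
    cov = cov + eps * torch.eye(z.shape[1], device=z.device)  # numerical jitter
    # Cholesky factorization cov = L L^T; then z @ L^{-T} has identity covariance.
    L = torch.linalg.cholesky(cov)
    return torch.linalg.solve_triangular(L, z.T, upper=False).T


def wmse_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """MSE between normalized whitened embeddings of two positive views.
    The whitening step 'scatters' the batch over the latent space, so the
    loss cannot be minimized by collapsing all representations to a point,
    and no negatives are required."""
    w1 = F.normalize(whiten(z1), dim=1)
    w2 = F.normalize(whiten(z2), dim=1)
    return (w1 - w2).pow(2).sum(dim=1).mean()


# Usage: z1 and z2 are encoder outputs for two augmentations of the same
# batch of images (random tensors here, just to exercise the loss).
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
loss = wmse_loss(z1, z2)
```

Because the objective only pulls positives together while whitening keeps the batch spread out, the same batch can also supply more than two augmented views per image, as the abstract notes.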