Paper Title

CaCo: Both Positive and Negative Samples are Directly Learnable via Cooperative-adversarial Contrastive Learning

Paper Authors

Xiao Wang, Yuhang Huang, Dan Zeng, Guo-Jun Qi

Paper Abstract

As a representative self-supervised method, contrastive learning has achieved great success in unsupervised training of representations. It trains an encoder by distinguishing positive samples from negative ones given query anchors. These positive and negative samples play critical roles in defining the objective to learn the discriminative encoder, preventing it from learning trivial features. While existing methods heuristically choose these samples, we present a principled method in which both positive and negative samples are directly learnable end-to-end with the encoder. We show that the positive and negative samples can be cooperatively and adversarially learned by minimizing and maximizing the contrastive loss, respectively. This yields cooperative positives and adversarial negatives with respect to the encoder, which are updated to continuously track the learned representation of the query anchors over mini-batches. The proposed method achieves 71.3% and 75.3% top-1 accuracy after 200 and 800 epochs, respectively, of pre-training a ResNet-50 backbone on ImageNet1K, without tricks such as multi-crop or stronger augmentations. With multi-crop, it can be further boosted to 75.7%. The source code and pre-trained model are released at https://github.com/maple-research-lab/caco.
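
To make the mechanism concrete, below is a minimal PyTorch sketch of one cooperative-adversarial update step. This is not the authors' released implementation; all names, tensor sizes, and step sizes are illustrative assumptions. It shows the core idea stated in the abstract: positive samples take a gradient-descent step on an InfoNCE-style contrastive loss (cooperative), while negative samples take a gradient-ascent step on the same loss (adversarial).

```python
# Illustrative sketch of cooperative-adversarial sample updates (assumed
# hyperparameters, not the official CaCo code).
import torch
import torch.nn.functional as F

dim, num_neg, batch, tau = 128, 4096, 16, 0.1  # assumed sizes and temperature

# Learnable samples, kept on the unit sphere like normalized encoder outputs.
positives = F.normalize(torch.randn(batch, dim), dim=1).requires_grad_()
negatives = F.normalize(torch.randn(num_neg, dim), dim=1).requires_grad_()

def info_nce(q, pos, neg, tau):
    """InfoNCE loss: each query should match its own positive (class 0)
    against all shared negatives."""
    l_pos = (q * pos).sum(dim=1, keepdim=True) / tau   # (B, 1)
    l_neg = q @ neg.t() / tau                          # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1)
    labels = torch.zeros(q.size(0), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels)

# Stand-in for encoder(query_batch); in the real method these are encoder outputs.
q = F.normalize(torch.randn(batch, dim), dim=1)

loss = info_nce(q, positives, negatives, tau)
loss.backward()

lr = 0.05  # assumed step size for the sample updates
with torch.no_grad():
    positives -= lr * positives.grad   # cooperative: descend the contrastive loss
    negatives += lr * negatives.grad   # adversarial: ascend the contrastive loss
    # Re-project onto the unit sphere and clear gradients for the next mini-batch.
    positives.copy_(F.normalize(positives, dim=1))
    negatives.copy_(F.normalize(negatives, dim=1))
    positives.grad.zero_()
    negatives.grad.zero_()
```

In the full method, the encoder itself is trained jointly by descending the same loss, so the learned positives and negatives continuously track the representations of the query anchors across mini-batches rather than being heuristically selected.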
