Title


MixCo: Mix-up Contrastive Learning for Visual Representation

Authors

Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun

Abstract


Contrastive learning has shown remarkable results in recent self-supervised approaches for visual representation. By learning to contrast positive pairs' representations against the corresponding negative pairs, one can train good visual representations without human annotations. This paper proposes Mix-up Contrast (MixCo), which extends the contrastive learning concept to semi-positives encoded from the mix-up of positive and negative images. MixCo aims to learn the relative similarity of representations, reflecting how much of the original positives the mixed images contain. We validate the efficacy of MixCo when applied to recent self-supervised learning algorithms under the standard linear evaluation protocol on TinyImageNet, CIFAR10, and CIFAR100. In the experiments, MixCo consistently improves test accuracy. Remarkably, the improvement is more significant when the learning capacity (e.g., model size) is limited, suggesting that MixCo might be more useful in real-world scenarios. The code is available at: https://github.com/Lee-Gihun/MixCo-Mixup-Contrast.
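To make the "semi-positive" idea concrete, the loss for a query encoded from a mix of images i and k can be viewed as a cross-entropy against soft targets λ and (1−λ) over the key representations. The following is a minimal NumPy sketch under stated assumptions: the function names, the cosine-similarity scoring, and the single-query formulation are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def mixco_soft_targets(lam, i, k, n):
    """Soft label vector for a mixed query: weight lam on the positive
    key i and (1 - lam) on the mixed-in negative key k."""
    t = np.zeros(n)
    t[i] = lam
    t[k] = 1.0 - lam
    return t

def mixco_loss(q_mix, keys, lam, i, k, tau=0.1):
    """Cross-entropy between temperature-scaled cosine similarities and
    the soft targets. With lam = 1 this reduces to standard InfoNCE."""
    q = q_mix / np.linalg.norm(q_mix)
    K = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = K @ q / tau                        # similarity to every key
    logp = logits - np.log(np.exp(logits).sum())  # log-softmax
    targets = mixco_soft_targets(lam, i, k, len(keys))
    return -(targets * logp).sum()
```

In training, `q_mix` would be the encoder output for the mixed image λ·x_i + (1−λ)·x_k, so the loss rewards representations whose similarity to each original positive matches the mixing ratio.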
