Paper Title

Prototypical Contrastive Learning of Unsupervised Representations

Authors

Junnan Li, Pan Zhou, Caiming Xiong, Steven C. H. Hoi

Abstract

This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that addresses the fundamental limitations of instance-wise contrastive learning. PCL not only learns low-level features for the task of instance discrimination, but more importantly, it implicitly encodes semantic structures of the data into the learned embedding space. Specifically, we introduce prototypes as latent variables to help find the maximum-likelihood estimation of the network parameters in an Expectation-Maximization framework. We iteratively perform E-step as finding the distribution of prototypes via clustering and M-step as optimizing the network via contrastive learning. We propose ProtoNCE loss, a generalized version of the InfoNCE loss for contrastive learning, which encourages representations to be closer to their assigned prototypes. PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks with substantial improvement in low-resource transfer learning. Code and pretrained models are available at https://github.com/salesforce/PCL.
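To make the abstract's EM loop and ProtoNCE loss more concrete, here is a minimal PyTorch sketch (not the authors' implementation; see the repository linked above for that). The function name proto_nce_term and its inputs are illustrative assumptions: it computes the prototype term of ProtoNCE for a single clustering granularity, given embeddings, prototypes, cluster assignments, and per-prototype concentration estimates produced by an E-step clustering pass.

```python
import torch
import torch.nn.functional as F

def proto_nce_term(embeddings: torch.Tensor,
                   prototypes: torch.Tensor,
                   assignments: torch.Tensor,
                   phi: torch.Tensor) -> torch.Tensor:
    """Prototype term of ProtoNCE for one clustering granularity (a sketch).

    embeddings:  (N, D) L2-normalized sample features from the encoder
    prototypes:  (K, D) L2-normalized cluster centroids from the E-step
    assignments: (N,)   index of the prototype each sample is assigned to
    phi:         (K,)   per-prototype concentration (temperature) estimates
    """
    # Similarity of every sample to every prototype, scaled by the
    # concentration of that prototype (tighter clusters -> smaller phi).
    logits = embeddings @ prototypes.t() / phi          # (N, K)
    # Cross-entropy against the assigned prototype realizes
    # -log[ exp(v.c_s / phi_s) / sum_j exp(v.c_j / phi_j) ],
    # pulling each representation toward its assigned prototype.
    return F.cross_entropy(logits, assignments)
```

Per the abstract, the full ProtoNCE objective would combine this term, averaged over several clusterings of different granularity, with the standard InfoNCE instance-discrimination loss; the E-step would then recompute prototypes and assignments (e.g., via k-means over the current embeddings) before the next M-step update of the network.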
