Paper Title

Unified Contrastive Learning in Image-Text-Label Space

Paper Authors

Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Bin Xiao, Ce Liu, Lu Yuan, Jianfeng Gao

Paper Abstract

Visual recognition is recently learned via either supervised learning on human-annotated image-label data or language-image contrastive learning with webly-crawled image-text pairs. While supervised learning may result in a more discriminative representation, language-image pretraining shows unprecedented zero-shot recognition capability, largely due to the different properties of data sources and learning objectives. In this work, we introduce a new formulation that combines the two data sources into a common image-text-label space. In this space, we propose a new learning paradigm, called Unified Contrastive Learning (UniCL), with a single learning objective to seamlessly prompt the synergy of the two data types. Extensive experiments show that UniCL is an effective way of learning semantically rich yet discriminative representations, universally for image recognition in zero-shot, linear-probe, full-finetuning, and transfer-learning scenarios. In particular, it attains gains of up to 9.2% and 14.5% on average on zero-shot recognition benchmarks over language-image contrastive learning and supervised learning methods, respectively. In the linear-probe setting, it also boosts performance over the two methods by 7.3% and 3.4%, respectively. Our study further indicates that UniCL on its own is a good learner on pure image-label data, rivaling supervised learning methods across three image classification datasets and two types of vision backbones, ResNet and Swin Transformer. Code is available at https://github.com/microsoft/UniCL.
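The key idea behind this single objective is that labels define the positive set: all image-text pairs in a batch that share a label are pulled together, so classification data (many images per label) and web caption data (a unique label per pair) are handled by the same loss. Below is a minimal PyTorch-style sketch of such a bidirectional contrastive loss under that reading of the abstract; the function name `unicl_loss`, the fixed temperature, and the toy batch are illustrative assumptions, and the official repository linked above contains the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def unicl_loss(image_feats, text_feats, labels, temperature=0.07):
    """Bidirectional contrastive loss over an image-text-label batch (sketch).

    image_feats: (N, D) L2-normalized image embeddings.
    text_feats:  (N, D) L2-normalized text embeddings.
    labels:      (N,) integer labels; entries sharing a label are
                 treated as positives for each other.
    """
    # Pairwise cosine similarities scaled by temperature.
    logits = image_feats @ text_feats.t() / temperature          # (N, N)

    # pos[i, j] = 1 when image i and text j carry the same label.
    # The diagonal is always positive, so every row/column has >= 1 positive.
    pos = (labels.unsqueeze(1) == labels.unsqueeze(0)).float()

    # Image-to-text: softmax over texts, averaged over each image's positives.
    loss_i2t = -(pos * F.log_softmax(logits, dim=1)).sum(1) / pos.sum(1)

    # Text-to-image: symmetric, softmax over images.
    loss_t2i = -(pos * F.log_softmax(logits, dim=0)).sum(0) / pos.sum(0)

    return (loss_i2t.mean() + loss_t2i.mean()) / 2

# Toy batch: two images sharing class label 0 (image-label data) plus two
# captioned web images, each caption getting its own unique label.
img = F.normalize(torch.randn(4, 256), dim=1)
txt = F.normalize(torch.randn(4, 256), dim=1)
y = torch.tensor([0, 0, 1, 2])
print(unicl_loss(img, txt, y))
```

When every label in the batch is unique, the positive mask reduces to the identity and the loss degenerates to a CLIP-style pairwise contrastive loss; when captions are class-name prompts shared across images, it behaves like a supervised contrastive objective, which is the synergy of the two data types that the abstract describes.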
