Paper Title

Weak Augmentation Guided Relational Self-Supervised Learning

Authors

Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Changshui Zhang, Xiaogang Wang, Chang Xu

Abstract

Self-supervised Learning (SSL), including the mainstream contrastive learning, has achieved great success in learning visual representations without data annotations. However, most methods mainly focus on instance-level information (i.e., the different augmented images of the same instance should have the same feature or cluster into the same class), but there is a lack of attention on the relationships between different instances. In this paper, we introduce a novel SSL paradigm, which we term the relational self-supervised learning (ReSSL) framework, that learns representations by modeling the relationship between different instances. Specifically, our proposed method employs a sharpened distribution of pairwise similarities among different instances as a *relation* metric, which is thus utilized to match the feature embeddings of different augmentations. To boost the performance, we argue that weak augmentations matter for representing a more reliable relation, and we leverage a momentum strategy for practical efficiency. The designed asymmetric predictor head and an InfoNCE warm-up strategy enhance the robustness to hyper-parameters and benefit the resulting performance. Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures, including various lightweight networks (e.g., EfficientNet and MobileNet).
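The core mechanism the abstract describes can be sketched as follows: each embedding's relation to a set of other instances is a temperature-sharpened softmax over pairwise similarities, and the weak-augmentation (teacher) relation, sharpened with a lower temperature, supervises the strong-augmentation (student) relation via cross-entropy. This is a minimal numpy sketch, not the paper's implementation; the function names, the use of a memory queue, and the specific temperature values are illustrative assumptions.

```python
import numpy as np

def relation_distribution(z, queue, tau):
    """Softmax over similarities between one L2-normalized embedding z
    and a queue of L2-normalized instance embeddings.
    A lower temperature tau gives a sharper distribution."""
    sims = queue @ z / tau          # scaled cosine similarities
    sims = sims - sims.max()        # shift for numerical stability
    p = np.exp(sims)
    return p / p.sum()

def ressl_loss(z_weak, z_strong, queue, tau_teacher=0.04, tau_student=0.1):
    """Cross-entropy between the teacher relation (weak augmentation,
    sharper temperature; treated as a fixed target in practice) and the
    student relation (strong augmentation). Temperatures are illustrative."""
    p_teacher = relation_distribution(z_weak, queue, tau_teacher)
    p_student = relation_distribution(z_strong, queue, tau_student)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

def normalize(v):
    """L2-normalize a vector (embeddings are assumed unit-norm)."""
    return v / np.linalg.norm(v)
```

In the full method, the teacher embedding would come from the momentum (EMA) encoder applied to the weakly augmented view, and the queue would be filled with past momentum-encoder outputs; here both are stand-in arrays to keep the sketch self-contained.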
