Paper Title

SelfAugment: Automatic Augmentation Policies for Self-Supervised Learning

Paper Authors

Colorado J. Reed, Sean Metzger, Aravind Srinivas, Trevor Darrell, Kurt Keutzer

Paper Abstract

A common practice in unsupervised representation learning is to use labeled data to evaluate the quality of the learned representations. This supervised evaluation is then used to guide critical aspects of the training process such as selecting the data augmentation policy. However, guiding an unsupervised training process through supervised evaluations is not possible for real-world data that does not actually contain labels (which may be the case, for example, in privacy sensitive fields such as medical imaging). Therefore, in this work we show that evaluating the learned representations with a self-supervised image rotation task is highly correlated with a standard set of supervised evaluations (rank correlation $> 0.94$). We establish this correlation across hundreds of augmentation policies, training settings, and network architectures and provide an algorithm (SelfAugment) to automatically and efficiently select augmentation policies without using supervised evaluations. Despite not using any labeled data, the learned augmentation policies perform comparably with augmentation policies that were determined using exhaustive supervised evaluations.
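The evaluation idea described in the abstract, judging a frozen representation by how well a small classifier can predict image rotations, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration and not the authors' released code: the function names (`rotation_eval`, `rotate_batch`) and all hyperparameters are assumptions chosen for clarity. It trains a linear head on frozen encoder features to classify 0/90/180/270-degree rotations and reports the head's accuracy as an unsupervised proxy metric.

```python
# Minimal sketch (assumed, not the paper's implementation) of rotation-based
# evaluation: freeze a pretrained encoder, train a linear head to classify
# image rotations, and use its accuracy as a label-free quality signal.
import torch
import torch.nn as nn
import torch.nn.functional as F


def rotate_batch(images: torch.Tensor):
    """Return all 4 rotations of each NCHW image batch plus rotation labels (0-3)."""
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)


def rotation_eval(encoder: nn.Module, loader, feat_dim: int, epochs: int = 5, device: str = "cpu"):
    """Train a linear rotation classifier on frozen features; return its accuracy.

    Assumes `encoder(x)` returns flat features of shape (N, feat_dim).
    Any ground-truth class labels in the loader are ignored.
    """
    encoder.eval().to(device)
    head = nn.Linear(feat_dim, 4).to(device)
    opt = torch.optim.SGD(head.parameters(), lr=0.1, momentum=0.9)

    for _ in range(epochs):
        for images, _ in loader:
            x, y = rotate_batch(images)
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = encoder(x)          # features stay frozen
            loss = F.cross_entropy(head(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    correct = total = 0
    with torch.no_grad():
        for images, _ in loader:
            x, y = rotate_batch(images)
            x, y = x.to(device), y.to(device)
            preds = head(encoder(x)).argmax(dim=1)
            correct += (preds == y).sum().item()
            total += y.numel()
    return correct / total  # higher accuracy ~ better representation (proxy metric)
```

In the paper's framing, this kind of rotation-prediction score stands in for supervised linear-evaluation accuracy when ranking candidate augmentation policies, since the two are reported to be highly rank-correlated.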
