Paper Title
Self-supervised Neural Architecture Search
Paper Authors
Paper Abstract
Neural Architecture Search (NAS) has recently been used to achieve improved performance in various tasks, most prominently in image classification. Yet, current search strategies rely on large labeled datasets, which limits their usage when only a small fraction of the data is annotated. Self-supervised learning has shown great promise in training neural networks using unlabeled data. In this work, we propose a self-supervised neural architecture search (SSNAS) that allows finding novel network models without the need for labeled data. We show that such a search leads to results comparable to those of supervised training with a "fully labeled" NAS and that it can improve the performance of self-supervised learning. Moreover, we demonstrate the advantage of the proposed approach when the number of labels available for the search is relatively small.
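
The abstract does not specify the search algorithm or the self-supervised objective. The sketch below illustrates the general idea under two assumptions: a DARTS-style alternating update of network weights and architecture parameters, and a SimCLR-style contrastive (NT-Xent) loss as the unlabeled training signal. The names `model`, `augment`, `search_step`, and the two optimizers are hypothetical placeholders, not identifiers from the paper.

```python
# Minimal sketch of a label-free architecture search step, assuming a
# DARTS-style alternating update and a SimCLR-style contrastive loss.
# These choices are illustrative assumptions, not the paper's stated method.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss between two augmented views of a batch; no labels."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / temperature                        # pairwise similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-pairs
    n = z1.size(0)
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def search_step(model, w_opt, alpha_opt, batch_w, batch_a, augment):
    """One alternating update: network weights w on one unlabeled batch,
    architecture parameters alpha on another. `augment` is a stochastic
    transform, so calling it twice yields two different views."""
    w_opt.zero_grad()
    loss_w = nt_xent_loss(model(augment(batch_w)), model(augment(batch_w)))
    loss_w.backward()
    w_opt.step()

    alpha_opt.zero_grad()
    loss_a = nt_xent_loss(model(augment(batch_a)), model(augment(batch_a)))
    loss_a.backward()
    alpha_opt.step()
```

The point of the sketch is that both the weight update and the architecture update are driven by the same contrastive objective, so no labels enter the search; a supervised NAS would use a cross-entropy loss on annotated data in both places instead.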