Paper Title

A Closer Look at Self-Supervised Lightweight Vision Transformers

Paper Authors

Shaoru Wang, Jin Gao, Zeming Li, Xiaoqin Zhang, Weiming Hu

Paper Abstract

Self-supervised learning on large-scale Vision Transformers (ViTs) as pre-training methods has achieved promising downstream performance. Yet, how much these pre-training paradigms promote lightweight ViTs' performance is considerably less studied. In this work, we develop and benchmark several self-supervised pre-training methods on image classification tasks and some downstream dense prediction tasks. We surprisingly find that if proper pre-training is adopted, even vanilla lightweight ViTs show comparable performance to previous SOTA networks with delicate architecture design. It breaks the recently popular conception that vanilla ViTs are not suitable for vision tasks in lightweight regimes. We also point out some defects of such pre-training, e.g., failing to benefit from large-scale pre-training data and showing inferior performance on data-insufficient downstream tasks. Furthermore, we analyze and clearly show the effect of such pre-training by analyzing the properties of the layer representation and attention maps for related models. Finally, based on the above analyses, a distillation strategy during pre-training is developed, which leads to further downstream performance improvement for MAE-based pre-training. Code is available at https://github.com/wangsr126/mae-lite.
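
The abstract describes MAE-based pre-training of a lightweight ViT with an added distillation term. Below is a minimal, hypothetical PyTorch sketch of that general idea: MAE-style masked reconstruction plus a loss that pulls the student encoder's features toward those of a frozen teacher. All module names, dimensions, and the loss weighting here are illustrative assumptions, not the mae-lite repository's actual API; the paper's exact distillation target may differ.

```python
# Hypothetical sketch: MAE-style pre-training of a lightweight ViT + distillation.
# Names, sizes, and loss weighting are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly keep a subset of patch tokens (MAE-style masking)."""
    B, N, D = tokens.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=tokens.device)
    ids_shuffle = noise.argsort(dim=1)        # random permutation per sample
    ids_restore = ids_shuffle.argsort(dim=1)  # inverse permutation
    ids_keep = ids_shuffle[:, :num_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N, device=tokens.device)
    mask.scatter_(1, ids_keep, 0.0)           # 1 = masked, 0 = visible
    return kept, mask, ids_restore


class TinyViTEncoder(nn.Module):
    """Stand-in for a vanilla lightweight ViT encoder (roughly ViT-Tiny scale)."""
    def __init__(self, dim: int = 192, depth: int = 4, heads: int = 3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        return self.blocks(x)


class TinyDecoder(nn.Module):
    """Stand-in MAE decoder mapping tokens back to flattened pixel patches."""
    def __init__(self, dim: int = 192, patch_dim: int = 768):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(dim, 3, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, 1)
        self.head = nn.Linear(dim, patch_dim)

    def forward(self, x):
        return self.head(self.blocks(x))


class MAEWithDistillation(nn.Module):
    """Hypothetical objective: MAE reconstruction + feature distillation from a frozen teacher."""
    def __init__(self, dim: int = 192, patch_dim: int = 768, teacher_dim: int = 384):
        super().__init__()
        self.embed = nn.Linear(patch_dim, dim)   # toy patch embedding
        self.encoder = TinyViTEncoder(dim)
        self.decoder = TinyDecoder(dim, patch_dim)
        self.proj = nn.Linear(dim, teacher_dim)  # match the teacher's feature width

    def forward(self, patches, teacher_feats, mask_ratio=0.75, lam=1.0):
        tokens = self.embed(patches)
        kept, mask, ids_restore = random_masking(tokens, mask_ratio)
        latent = self.encoder(kept)

        # MAE branch: append mask tokens, unshuffle, reconstruct raw patches,
        # and compute the loss on masked positions only.
        B, N = mask.shape
        D = latent.shape[-1]
        mask_tokens = self.decoder.mask_token.expand(B, N - latent.shape[1], -1)
        full = torch.cat([latent, mask_tokens], dim=1)
        full = torch.gather(full, 1, ids_restore.unsqueeze(-1).expand(-1, -1, D))
        pred = self.decoder(full)
        rec = ((pred - patches) ** 2).mean(dim=-1)
        rec_loss = (rec * mask).sum() / mask.sum()

        # Distillation branch: align pooled student features with the frozen teacher's.
        student_pooled = self.proj(latent.mean(dim=1))
        distill_loss = F.smooth_l1_loss(student_pooled, teacher_feats)

        return rec_loss + lam * distill_loss


# Example usage with random data; `teacher_feats` stands in for pooled features
# precomputed by a frozen teacher network on the same (unmasked) images.
model = MAEWithDistillation()
patches = torch.randn(2, 196, 768)    # a 14x14 grid of flattened 16x16x3 patches
teacher_feats = torch.randn(2, 384)
loss = model(patches, teacher_feats)
loss.backward()
```

As in standard MAE, the reconstruction loss is taken over masked patches only, while the distillation term operates on the visible-token features; the teacher stays frozen, so the extra cost during pre-training is a single forward pass. Whether distillation targets pooled features, per-token features, or attention maps is a design choice left open by this sketch.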
