Paper Title
Simpler is Better: off-the-shelf Continual Learning Through Pretrained Backbones
Paper Authors
Paper Abstract
In this short paper, we propose an off-the-shelf baseline for Continual Learning of Computer Vision problems, leveraging the power of pretrained models. In doing so, we devise a simple approach that achieves strong performance on most common benchmarks. Our approach is fast, since it requires no parameter updates, and has minimal memory requirements (on the order of KBytes). In particular, the "training" phase reorders the data and exploits the power of the pretrained model to compute a prototype for each class and fill a memory bank. At inference time, we match the closest prototype through a kNN-like approach, which provides the prediction. We show how this naive solution can act as an off-the-shelf continual learning system. To better consolidate our results, we compare the devised pipeline against common CNN models and show the superiority of Vision Transformers, suggesting that such architectures produce features of higher quality. Moreover, this simple pipeline raises the same questions raised by previous work \cite{gdumb} about the effective progress made by the CL community, especially regarding the datasets considered and the usage of pretrained models. Code is live at https://github.com/francesco-p/off-the-shelf-cl
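Below is a minimal sketch of the prototype-based baseline as the abstract describes it: a frozen pretrained backbone extracts features, "training" merely averages features per class into prototypes stored in a memory bank, and inference assigns the label of the nearest prototype. The use of the `timm` library and the specific ViT checkpoint name are assumptions for illustration, not necessarily the authors' exact setup; any frozen feature extractor fits the same scheme.

```python
# Sketch of an off-the-shelf prototype classifier over a frozen backbone.
# Assumption: the `timm` library and the "vit_base_patch16_224" checkpoint
# stand in for whatever pretrained backbone the paper actually uses.
import torch
import timm


class PrototypeClassifier:
    def __init__(self, backbone_name: str = "vit_base_patch16_224"):
        # num_classes=0 makes timm return pooled features instead of logits.
        self.backbone = timm.create_model(backbone_name, pretrained=True, num_classes=0)
        self.backbone.eval()
        self.prototypes = {}  # class id -> mean feature vector (the memory bank)

    @torch.no_grad()
    def fit_task(self, images: torch.Tensor, labels: torch.Tensor) -> None:
        # "Training" phase: no parameter updates, just per-class feature means.
        feats = self.backbone(images)
        for c in labels.unique():
            self.prototypes[int(c)] = feats[labels == c].mean(dim=0)

    @torch.no_grad()
    def predict(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)
        classes = sorted(self.prototypes)
        protos = torch.stack([self.prototypes[c] for c in classes])  # (C, D)
        # Nearest prototype under Euclidean distance (kNN-like with k=1).
        dists = torch.cdist(feats, protos)  # (N, C)
        nearest = dists.argmin(dim=1)
        return torch.tensor([classes[i] for i in nearest])
```

Continual learning then amounts to calling `fit_task` once per incoming task: each call only adds or updates prototypes for the classes it sees, so earlier classes are never overwritten and the only stored state is one feature vector per class, which is where the KBytes-scale memory footprint comes from.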