Paper Title
ST-Adapter: Parameter-Efficient Image-to-Video Transfer Learning
Paper Authors
Paper Abstract
Capitalizing on large pre-trained models for various downstream tasks of interest has recently emerged as a promising direction. Due to the ever-growing model size, the standard full fine-tuning based task adaptation strategy becomes prohibitively costly in terms of model training and storage. This has led to a new research direction in parameter-efficient transfer learning. However, existing attempts typically focus on downstream tasks from the same modality (e.g., image understanding) as the pre-trained model. This creates a limitation because in some specific modalities (e.g., video understanding), such a strong pre-trained model with sufficient knowledge is scarce or unavailable. In this work, we investigate this novel cross-modality transfer learning setting, namely parameter-efficient image-to-video transfer learning. To solve this problem, we propose a new Spatio-Temporal Adapter (ST-Adapter) for parameter-efficient fine-tuning per video task. With a built-in spatio-temporal reasoning capability in a compact design, ST-Adapter enables a pre-trained image model without temporal knowledge to reason about dynamic video content at a small (~8%) per-task parameter cost, requiring approximately 20 times fewer updated parameters than previous work. Extensive experiments on video action recognition tasks show that our ST-Adapter can match or even outperform the strong full fine-tuning strategy and state-of-the-art video models, whilst enjoying the advantage of parameter efficiency. The code and model are available at https://github.com/linziyi96/st-adapter.
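The abstract describes ST-Adapter only as a compact module with built-in spatio-temporal reasoning inserted into a frozen pre-trained image model. The sketch below illustrates one plausible realization of such a module as a bottleneck adapter with a depthwise 3D convolution and a residual connection; the class name `STAdapter`, the feature and bottleneck dimensions, the kernel size, and the placement of the class token are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn


class STAdapter(nn.Module):
    """Minimal sketch of a spatio-temporal adapter block.

    Assumption: a linear down-projection, a depthwise 3D convolution over
    (time, height, width), and a linear up-projection, added residually to
    the token features of a frozen image transformer. Hyper-parameters are
    illustrative, not the paper's exact settings.
    """

    def __init__(self, dim=768, bottleneck_dim=192, kernel_size=(3, 3, 3)):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)      # down-projection
        self.conv = nn.Conv3d(                          # depthwise spatio-temporal conv
            bottleneck_dim, bottleneck_dim, kernel_size,
            padding=tuple(k // 2 for k in kernel_size),
            groups=bottleneck_dim,
        )
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)        # up-projection

    def forward(self, x, t, h, w):
        # x: (batch, t*h*w, dim) patch tokens from all frames of a clip
        # (class tokens, if any, are assumed to bypass the adapter).
        b, n, _ = x.shape
        z = self.down(x)
        z = z.reshape(b, t, h, w, -1).permute(0, 4, 1, 2, 3)  # -> (b, c, t, h, w)
        z = self.conv(z)                                       # temporal + spatial mixing
        z = z.permute(0, 2, 3, 4, 1).reshape(b, n, -1)
        z = self.up(self.act(z))
        return x + z                                           # residual connection


# Usage sketch: only the adapter parameters would be trained, with the
# backbone image model kept frozen.
if __name__ == "__main__":
    adapter = STAdapter()
    tokens = torch.randn(2, 8 * 14 * 14, 768)  # 8 frames of 14x14 patch tokens
    out = adapter(tokens, t=8, h=14, w=14)
    print(out.shape)  # torch.Size([2, 1568, 768])
```

In this sketch, only the adapter's parameters (the two projections and the depthwise convolution) are updated per task, which is what keeps the per-task parameter cost small relative to full fine-tuning of the backbone.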