Paper Title

VMFormer: End-to-End Video Matting with Transformer

Paper Authors

Jiachen Li, Vidit Goel, Marianna Ohanyan, Shant Navasardyan, Yunchao Wei, Humphrey Shi

Paper Abstract

Video matting aims to predict the alpha matte for each frame of a given input video sequence. For the past few years, solutions to video matting have been dominated by deep convolutional neural networks (CNNs), which have become the de facto standard in both academia and industry. However, CNN-based architectures have a built-in inductive bias toward locality and do not capture the global characteristics of an image. They also lack long-range temporal modeling, since processing the feature maps of many frames is computationally costly. In this paper, we propose VMFormer: a transformer-based end-to-end method for video matting. Given an input video sequence, it predicts the alpha matte of each frame from learnable queries. Specifically, it leverages self-attention layers to build global integration of feature sequences, with short-range temporal modeling over successive frames. We further apply queries to learn global representations through cross-attention in the transformer decoder, with long-range temporal modeling over all queries. In the prediction stage, both the queries and the corresponding feature maps are used to make the final prediction of the alpha mattes. Experiments show that VMFormer outperforms previous CNN-based video matting methods on composited benchmarks. To the best of our knowledge, it is the first end-to-end video matting solution built upon a full vision transformer that makes predictions on learnable queries. The project is open-sourced at https://chrisjuniorli.github.io/project/VMFormer/
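The core mechanism the abstract describes, learnable queries that gather global context via cross-attention and are then matched against per-frame feature maps to predict alpha mattes, can be illustrated with a minimal PyTorch sketch. The module name `QueryMattingHead` and the shapes, query count, and fusion step below are assumptions for illustration, not the authors' implementation; the real code is at the project link above.

```python
# Minimal, illustrative sketch of query-based alpha matte prediction,
# under assumed shapes; not the authors' implementation.
import torch
import torch.nn as nn

class QueryMattingHead(nn.Module):
    def __init__(self, d_model=256, num_queries=8, nhead=8):
        super().__init__()
        # Learnable queries shared across frames; each aggregates global context.
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        # Cross-attention: queries attend to the flattened feature map of each frame.
        self.cross_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)

    def forward(self, feat):
        # feat: (T, H*W, d_model) -- per-frame feature maps flattened to sequences.
        T = feat.shape[0]
        q = self.queries.unsqueeze(0).expand(T, -1, -1)      # (T, Q, d_model)
        q, _ = self.cross_attn(q, feat, feat)                # queries gather global context
        # Prediction stage: dot-product between queries and feature maps yields
        # one alpha logit map per query (later fused and upsampled to full resolution).
        alpha_logits = torch.einsum("tqc,tnc->tqn", q, feat)
        return alpha_logits.sigmoid()                        # (T, Q, H*W), values in [0, 1]

# Usage on a toy 4-frame clip with 32x32 feature maps:
head = QueryMattingHead()
alphas = head(torch.randn(4, 32 * 32, 256))                 # -> (4, 8, 1024)
```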
