Paper Title
End-to-End Video Instance Segmentation with Transformers
Paper Authors
Paper Abstract
Video instance segmentation (VIS) is the task of simultaneously classifying, segmenting and tracking object instances of interest in video. Recent methods typically develop sophisticated pipelines to tackle this task. Here, we propose a new video instance segmentation framework built upon Transformers, termed VisTR, which views the VIS task as a direct end-to-end parallel sequence decoding/prediction problem. Given a video clip consisting of multiple image frames as input, VisTR directly outputs, in order, the sequence of masks for each instance in the video. At the core is a new, effective instance sequence matching and segmentation strategy, which supervises and segments instances at the sequence level as a whole. VisTR frames instance segmentation and tracking from the same perspective of similarity learning, which considerably simplifies the overall pipeline and differs significantly from existing approaches. Without bells and whistles, VisTR achieves the highest speed among all existing VIS models, and the best result among methods using a single model on the YouTube-VIS dataset. For the first time, we demonstrate a much simpler and faster video instance segmentation framework built upon Transformers, achieving competitive accuracy. We hope that VisTR can motivate future research on more video understanding tasks.
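To make the "instance sequence matching" idea concrete, below is a minimal sketch of how sequence-level matching could be implemented. This is not the authors' released code: it assumes the Hungarian-matching formulation common to set-prediction models such as DETR, and the tensor shapes, cost terms, and weights (w_cls, w_mask) are illustrative assumptions; the paper's exact cost may include additional terms.

    # A minimal sketch of sequence-level instance matching, NOT the
    # authors' implementation. Assumes DETR-style Hungarian matching;
    # shapes and cost weights (w_cls, w_mask) are illustrative assumptions.
    import torch
    from scipy.optimize import linear_sum_assignment

    def match_instance_sequences(pred_logits, pred_masks, gt_labels,
                                 gt_masks, w_cls=1.0, w_mask=1.0):
        """Match N predicted instance sequences to M ground-truth instances.

        pred_logits: (N, C)        class logits, one per predicted sequence
        pred_masks:  (N, T, H, W)  predicted mask logits over T frames
        gt_labels:   (M,)          ground-truth class index per instance
        gt_masks:    (M, T, H, W)  ground-truth binary masks over T frames
        """
        probs = pred_logits.softmax(-1)                  # (N, C)
        cost_cls = -probs[:, gt_labels]                  # (N, M)

        # Sequence-level mask cost: mean per-pixel L1 distance over the
        # whole clip, so each instance is matched and supervised as one
        # sequence rather than frame by frame.
        p = pred_masks.sigmoid().flatten(1)              # (N, T*H*W)
        g = gt_masks.float().flatten(1)                  # (M, T*H*W)
        cost_mask = torch.cdist(p, g, p=1) / p.shape[1]  # (N, M)

        cost = w_cls * cost_cls + w_mask * cost_mask     # (N, M)
        rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
        return rows, cols  # prediction rows[i] matches ground truth cols[i]

Under this reading of the abstract, once the one-to-one assignment is fixed for the whole clip, classification and mask losses are applied per matched pair. Matching at the sequence level is what unifies segmentation and tracking: a ground-truth instance is bound to a single prediction slot across all T frames, so no separate per-frame association step is needed.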