Paper Title
Two Video Data Sets for Tracking and Retrieval of Out of Distribution Objects
Paper Authors
Paper Abstract
In this work, we present two video test data sets for the novel computer vision (CV) task of out-of-distribution tracking (OOD tracking). Here, OOD objects are understood as objects whose semantic class lies outside the semantic space of an underlying image segmentation algorithm, or instances within the semantic space that nevertheless look decisively different from the instances contained in the training data. OOD objects occurring in video sequences should be detected in single frames as early as possible and tracked for as long as possible over their time of appearance. During the time of appearance, they should be segmented as precisely as possible. We present the SOS data set, containing 20 video sequences of street scenes and more than 1000 labeled frames with up to two OOD objects. We furthermore publish the synthetic CARLA-WildLife data set, which consists of 26 video sequences containing up to four OOD objects in a single frame. We propose metrics to measure the success of OOD tracking and develop a baseline algorithm that efficiently tracks the OOD objects. As an application that benefits from OOD tracking, we retrieve OOD sequences from unlabeled videos of street scenes containing OOD objects.
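The abstract does not specify the baseline tracker, but the core idea of linking per-frame OOD detections into tracks can be illustrated with a minimal sketch. The following is an assumption-laden illustration, not the authors' method: it greedily extends tracks frame by frame by matching each active track's most recent segmentation mask to the best-overlapping new mask, using mask IoU with a hypothetical threshold of 0.3.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection over union of two boolean segmentation masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def link_tracks(frames, iou_threshold=0.3):
    """Greedily link per-frame OOD masks into tracks by mask IoU.

    Illustrative sketch only; the paper's baseline algorithm may differ.
    frames: list (one entry per frame) of lists of boolean masks.
    Returns a list of tracks, each a list of (frame_index, mask_index).
    """
    tracks = []   # all tracks, finished or active
    active = []   # indices into `tracks` that are still being extended
    for t, masks in enumerate(frames):
        matched = set()
        next_active = []
        for ti in active:
            fi, mi = tracks[ti][-1]
            prev = frames[fi][mi]
            # match this track to the best-overlapping unclaimed new mask
            best, best_iou = None, iou_threshold
            for j, m in enumerate(masks):
                if j in matched:
                    continue
                iou = mask_iou(prev, m)
                if iou >= best_iou:
                    best, best_iou = j, iou
            if best is not None:
                tracks[ti].append((t, best))
                matched.add(best)
                next_active.append(ti)
        # every unmatched mask opens a new track (a newly appearing OOD object)
        for j in range(len(masks)):
            if j not in matched:
                tracks.append([(t, j)])
                next_active.append(len(tracks) - 1)
        active = next_active
    return tracks
```

A track's length in frames then directly supports metrics of the kind the abstract mentions, e.g. how early an OOD object is first detected and for how much of its appearance it is tracked.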