Paper Title
Synthesizing the Unseen for Zero-shot Object Detection
Paper Authors
Paper Abstract
The existing zero-shot detection approaches project visual features to the semantic domain for seen objects, hoping to map unseen objects to their corresponding semantics during inference. However, since unseen objects are never observed during training, the detection model is skewed towards seen content, labeling the unseen as background or as a seen class. In this work, we propose to synthesize visual features for unseen classes, so that the model learns both seen and unseen objects in the visual domain. The major challenge then becomes: how can unseen objects be accurately synthesized using only their class semantics? Towards this ambitious goal, we propose a novel generative model that uses class semantics not only to generate the features but also to discriminatively separate them. Further, using a unified model, we ensure the synthesized features have high diversity, representing the intra-class differences and the variable localization precision of the detected bounding boxes. We test our approach on three object detection benchmarks, PASCAL VOC, MSCOCO, and ILSVRC detection, under both conventional and generalized settings, showing impressive gains over the state-of-the-art methods. Our code is available at https://github.com/nasir6/zero_shot_detection.
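The core idea of the abstract, synthesizing diverse visual features for an unseen class conditioned on its class semantics, can be sketched as below. This is an illustrative toy sketch, not the paper's actual architecture: the dimensions, the two-layer MLP, and the untrained random weights are all assumptions standing in for a trained conditional generator.

```python
import numpy as np

# Assumed dimensions for illustration only.
SEM_DIM = 300    # size of the class-semantic vector (e.g., a word embedding)
NOISE_DIM = 100  # latent noise size
FEAT_DIM = 1024  # size of the synthesized visual feature

rng = np.random.default_rng(0)
# Random, untrained weights stand in for a trained generator network.
W1 = rng.standard_normal((SEM_DIM + NOISE_DIM, 512)) * 0.02
W2 = rng.standard_normal((512, FEAT_DIM)) * 0.02

def synthesize_features(class_semantic: np.ndarray, n: int) -> np.ndarray:
    """Generate n visual features conditioned on one class-semantic vector.

    Fresh noise per sample is what gives the synthesized features the
    intra-class diversity the abstract emphasizes."""
    noise = rng.standard_normal((n, NOISE_DIM))
    cond = np.tile(class_semantic, (n, 1))     # repeat the semantics per sample
    z = np.concatenate([noise, cond], axis=1)  # condition the generator on them
    h = np.maximum(0.0, z @ W1)                # ReLU hidden layer
    return h @ W2

# Hypothetical unseen class: its semantic vector is all the generator needs.
unseen_semantic = rng.standard_normal(SEM_DIM)
feats = synthesize_features(unseen_semantic, n=8)
print(feats.shape)  # (8, 1024)
```

In the full approach such synthesized features would be mixed with real seen-class features to train the detector's classifier, so that unseen classes are learned in the visual domain rather than mapped to at inference time.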