Paper Title
Support-set based Multi-modal Representation Enhancement for Video Captioning
Paper Authors
Paper Abstract
Video captioning is a challenging task that necessitates a thorough comprehension of visual scenes. Existing methods follow a typical one-to-one mapping, which concentrates on a limited sample space while ignoring the intrinsic semantic associations between samples, resulting in rigid and uninformative expressions. To address this issue, we propose a novel and flexible framework, namely the Support-set based Multi-modal Representation Enhancement (SMRE) model, to mine rich information in a semantic subspace shared between samples. Specifically, we propose a Support-set Construction (SC) module that constructs a support-set to learn the underlying connections between samples and obtain semantic-related visual elements. During this process, we design a Semantic Space Transformation (SST) module to constrain the relative distance and administrate multi-modal interactions in a self-supervised way. Extensive experiments on the MSVD and MSR-VTT datasets demonstrate that our SMRE achieves state-of-the-art performance.
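To make the abstract's two modules concrete, below is a minimal PyTorch sketch of (a) constructing a support-set from the most similar samples in a batch and (b) projecting both modalities into a shared semantic space under a self-supervised relative-distance constraint. Everything here is an illustrative assumption: the class names, the top-k cosine-similarity selection, and the pairwise-distance MSE loss are hypothetical readings of the abstract, not the paper's actual design.

```python
# Hypothetical sketch of the SC and SST ideas named in the abstract.
# The selection rule, aggregation, and loss form are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SupportSetConstruction(nn.Module):
    """Select the k most similar samples in a batch as a support-set
    and aggregate them into semantic-related visual elements."""

    def __init__(self, dim: int, k: int = 4):
        super().__init__()
        self.k = k
        self.proj = nn.Linear(dim, dim)

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (B, D) pooled visual features for a batch of videos
        q = F.normalize(self.proj(video_feats), dim=-1)
        sim = q @ q.t()                          # (B, B) cosine similarity
        sim.fill_diagonal_(float("-inf"))        # exclude the sample itself
        topk_sim, topk_idx = sim.topk(self.k, dim=-1)
        weights = topk_sim.softmax(dim=-1)       # (B, k) attention weights
        support = video_feats[topk_idx]          # (B, k, D) support samples
        # Weighted aggregation of the support-set, fused with the query.
        aggregated = (weights.unsqueeze(-1) * support).sum(dim=1)
        return video_feats + aggregated


class SemanticSpaceTransformation(nn.Module):
    """Map visual and textual features into a shared semantic subspace and
    penalize mismatched relative distances between the two modalities."""

    def __init__(self, vis_dim: int, txt_dim: int, sem_dim: int = 256):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, sem_dim)
        self.txt_proj = nn.Linear(txt_dim, sem_dim)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor):
        v = F.normalize(self.vis_proj(vis), dim=-1)  # (B, S)
        t = F.normalize(self.txt_proj(txt), dim=-1)  # (B, S)
        # Self-supervised constraint: pairwise distances among videos should
        # mirror pairwise distances among their captions (one possible
        # reading of "constrain relative distance").
        dist_v = torch.cdist(v, v)
        dist_t = torch.cdist(t, t)
        loss = F.mse_loss(dist_v, dist_t)
        return v, t, loss


# Illustrative usage with random features (dimensions are arbitrary).
sc = SupportSetConstruction(dim=512, k=4)
sst = SemanticSpaceTransformation(vis_dim=512, txt_dim=300)
videos, captions = torch.randn(8, 512), torch.randn(8, 300)
enhanced = sc(videos)                            # semantic-related visual elements
v_sem, t_sem, sst_loss = sst(enhanced, captions)
```

The distance-matching loss is one plausible instantiation of constraining relative distance "in a self-supervised way": it requires no extra labels, since the caption embeddings themselves supervise the geometry of the visual embeddings.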