Paper Title
End-to-End Multimodal Representation Learning for Video Dialog
Authors
Abstract
The video-based dialog task is a challenging multimodal learning task that has received increasing attention over the past few years, with state-of-the-art models setting new performance records. This progress is largely driven by the adoption of more powerful transformer-based language encoders. Despite this progress, existing approaches do not effectively utilize visual features to help solve the task. Recent studies show that state-of-the-art models are biased toward textual information rather than visual cues. To better leverage the available visual information, this study proposes a new framework that combines a 3D-CNN network and transformer-based networks into a single visual encoder to extract more robust semantic representations from videos. The visual encoder is trained end-to-end jointly with the other input modalities, such as text and audio. Experiments on the AVSD task show significant improvements over baselines in both generative and retrieval settings.
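The abstract describes a visual encoder that couples a 3D-CNN clip backbone with transformer layers over the resulting clip-level features. The sketch below is a minimal illustration of such a combined encoder, not the authors' implementation; the torchvision r3d_18 backbone, the clip length, and all dimensions are assumptions made for this example.

```python
# Minimal sketch (assumption, not the paper's code) of a visual encoder that
# combines a 3D-CNN clip backbone with a transformer over clip-level features.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


class VisualEncoder(nn.Module):
    def __init__(self, d_model: int = 512, num_layers: int = 4, num_heads: int = 8):
        super().__init__()
        # 3D-CNN backbone: keep everything except the final classification layer.
        backbone = r3d_18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1, 1)
        self.proj = nn.Linear(512, d_model)
        # Transformer encoder over the sequence of clip-level features.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=num_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, num_clips, channels, frames, height, width)
        b, n, c, t, h, w = clips.shape
        x = self.cnn(clips.view(b * n, c, t, h, w))   # (B*N, 512, 1, 1, 1)
        x = self.proj(x.flatten(1)).view(b, n, -1)    # (B, N, d_model)
        return self.transformer(x)                    # contextualized clip features


if __name__ == "__main__":
    encoder = VisualEncoder()
    video = torch.randn(2, 8, 3, 16, 112, 112)  # 2 videos, 8 clips of 16 frames each
    print(encoder(video).shape)                  # torch.Size([2, 8, 512])
```

In an end-to-end multimodal setup, the clip features returned by such an encoder would be fused with text and audio representations and trained jointly with the dialog objective, rather than using frozen, precomputed video features.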