Title


Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing

Authors

Yapeng Tian, Dingzeyu Li, Chenliang Xu

Abstract


In this paper, we introduce a new problem, named audio-visual video parsing, which aims to parse a video into temporal event segments and label them as either audible, visible, or both. Such a problem is essential for a complete understanding of the scene depicted inside a video. To facilitate exploration, we collect a Look, Listen, and Parse (LLP) dataset to investigate audio-visual video parsing in a weakly-supervised manner. This task can be naturally formulated as a Multimodal Multiple Instance Learning (MMIL) problem. Concretely, we propose a novel hybrid attention network to explore unimodal and cross-modal temporal contexts simultaneously. We develop an attentive MMIL pooling method to adaptively explore useful audio and visual content from different temporal extent and modalities. Furthermore, we discover and mitigate modality bias and noisy label issues with an individual-guided learning mechanism and label smoothing technique, respectively. Experimental results show that the challenging audio-visual video parsing can be achieved even with only video-level weak labels. Our proposed framework can effectively leverage unimodal and cross-modal temporal contexts and alleviate modality bias and noisy labels problems.
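The abstract's attentive MMIL pooling aggregates snippet-level audio and visual predictions into a video-level prediction using learned attention over both time and modality. Below is a minimal PyTorch-style sketch of that idea; the module and parameter names (`AttentiveMMILPooling`, `temporal_fc`, `modality_fc`) and the feature dimensions are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of attentive MMIL pooling, assuming per-snippet audio/visual features.
# Attention weights over time and modality decide how much each snippet and
# each modality contributes to the video-level event prediction.
import torch
import torch.nn as nn


class AttentiveMMILPooling(nn.Module):
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Linear(dim, num_classes)   # snippet-level event probabilities
        self.temporal_fc = nn.Linear(dim, num_classes)  # attention logits over time
        self.modality_fc = nn.Linear(dim, num_classes)  # attention logits over modality

    def forward(self, audio_feat: torch.Tensor, visual_feat: torch.Tensor):
        # audio_feat, visual_feat: (batch, T, dim) snippet features for each modality
        x = torch.stack([audio_feat, visual_feat], dim=2)   # (B, T, 2, dim)
        probs = torch.sigmoid(self.classifier(x))           # (B, T, 2, num_classes)
        w_t = torch.softmax(self.temporal_fc(x), dim=1)     # attention over time
        w_m = torch.softmax(self.modality_fc(x), dim=2)     # attention over modality
        video_prob = (w_t * w_m * probs).sum(dim=(1, 2))    # (B, num_classes)
        return probs, video_prob


if __name__ == "__main__":
    pool = AttentiveMMILPooling(dim=512, num_classes=25)
    a = torch.randn(2, 10, 512)   # ten one-second audio snippets
    v = torch.randn(2, 10, 512)   # ten corresponding visual snippets
    snippet_probs, video_probs = pool(a, v)
    print(snippet_probs.shape, video_probs.shape)
```

In a weakly-supervised setup such as the one described in the abstract, only `video_prob` would be supervised by the video-level label, while `probs` provides the snippet-level, per-modality parsing at inference time.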
