Paper Title
Joint-Modal Label Denoising for Weakly-Supervised Audio-Visual Video Parsing
Paper Authors
Paper Abstract
This paper focuses on the weakly-supervised audio-visual video parsing task, which aims to recognize all events belonging to each modality and localize their temporal boundaries. This task is challenging because only overall labels indicating the video events are provided for training. However, an event might be labeled but not appear in one of the modalities, which results in a modality-specific noisy label problem. In this work, we propose a training strategy to dynamically identify and remove modality-specific noisy labels. It is motivated by two key observations: 1) networks tend to learn clean samples first; and 2) a labeled event appears in at least one modality. Specifically, we sort the losses of all instances within a mini-batch separately for each modality, and then select noisy samples according to the relationships between intra-modal and inter-modal losses. In addition, we propose a simple but effective noise ratio estimation method that computes the proportion of instances whose confidence is below a preset threshold. Our method achieves large improvements over the previous state of the art (e.g., from 60.0\% to 63.8\% on the segment-level visual metric), which demonstrates its effectiveness. Code and trained models are publicly available at \url{https://github.com/MCG-NJU/JoMoLD}.
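The abstract describes two mechanisms: selecting modality-specific noisy labels by ranking per-instance losses and comparing them across modalities, and estimating the per-modality noise ratio from the fraction of low-confidence instances. The following is a minimal PyTorch-style sketch of these two ideas; it is not the released JoMoLD code, and the function names, the per-instance loss inputs, and the top-k candidate rule are simplifying assumptions for illustration only.

```python
import torch


def estimate_noise_ratio(confidences: torch.Tensor, threshold: float = 0.5) -> float:
    """Hypothetical noise ratio estimate for one modality/class:
    the fraction of labeled instances whose predicted confidence
    falls below a preset threshold."""
    return (confidences < threshold).float().mean().item()


def select_noisy_labels(audio_loss: torch.Tensor,
                        visual_loss: torch.Tensor,
                        noise_ratio_a: float,
                        noise_ratio_v: float):
    """Flag modality-specific noisy labels within a mini-batch for one labeled event.

    audio_loss / visual_loss: per-instance losses, shape (B,).
    For each modality, the top `noise_ratio` fraction of instances by loss are
    treated as noise candidates; a candidate label is only removed from a modality
    if the other modality fits the label better (smaller loss), reflecting the
    observation that a labeled event appears in at least one modality.
    """
    B = audio_loss.numel()
    k_a = int(round(noise_ratio_a * B))
    k_v = int(round(noise_ratio_v * B))

    noisy_a = torch.zeros(B, dtype=torch.bool)
    noisy_v = torch.zeros(B, dtype=torch.bool)

    if k_a > 0:
        # Largest audio losses are candidates for audio-specific label noise.
        cand_a = torch.topk(audio_loss, k_a).indices
        noisy_a[cand_a] = audio_loss[cand_a] > visual_loss[cand_a]
    if k_v > 0:
        # Largest visual losses are candidates for visual-specific label noise.
        cand_v = torch.topk(visual_loss, k_v).indices
        noisy_v[cand_v] = visual_loss[cand_v] > audio_loss[cand_v]
    return noisy_a, noisy_v
```

In this reading, the returned masks would be used to zero out (or down-weight) the corresponding modality-specific labels before recomputing the training loss in subsequent epochs; the exact removal schedule in the actual method is not specified by the abstract.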