Paper Title
Egocentric Audio-Visual Noise Suppression
Paper Authors
Paper Abstract
This paper studies audio-visual noise suppression for egocentric videos -- where the speaker is not captured in the video. Instead, potential noise sources are visible on screen with the camera emulating the off-screen speaker's view of the outside world. This setting is different from prior work in audio-visual speech enhancement that relies on lip and facial visuals. In this paper, we first demonstrate that egocentric visual information is helpful for noise suppression. We compare object recognition and action classification-based visual feature extractors and investigate methods to align audio and visual representations. Then, we examine different fusion strategies for the aligned features, and locations within the noise suppression model to incorporate visual information. Experiments demonstrate that visual features are most helpful when used to generate additive correction masks. Finally, in order to ensure that the visual features are discriminative with respect to different noise types, we introduce a multi-task learning framework that jointly optimizes audio-visual noise suppression and video-based acoustic event detection. This proposed multi-task framework outperforms the audio-only baseline on all metrics, including a 0.16 PESQ improvement. Extensive ablations reveal the improved performance of the proposed model with multiple active distractors, over all noise types, and across different SNRs.
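The two ideas highlighted in the abstract -- visual features supplying an additive correction to an audio-predicted mask, and a multi-task objective combining noise suppression with acoustic event detection -- can be illustrated with a minimal sketch. The module names, feature dimensions, and loss weighting below are assumptions for illustration only, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class AVNoiseSuppressor(nn.Module):
    """Illustrative sketch (not the paper's exact model): an audio branch
    predicts a base spectral mask, a visual branch predicts an additive
    correction to that mask, and an auxiliary head performs video-based
    acoustic event detection for multi-task training."""

    def __init__(self, n_freq=257, visual_dim=512, n_events=10):
        super().__init__()
        # Audio branch: frame-wise recurrent encoder predicting a base mask.
        self.audio_enc = nn.GRU(n_freq, 256, batch_first=True)
        self.audio_mask = nn.Linear(256, n_freq)
        # Visual branch: frame-aligned video features (e.g. from an object or
        # action classifier) predict an additive correction to the audio mask.
        self.visual_proj = nn.Linear(visual_dim, 256)
        self.visual_correction = nn.Linear(256, n_freq)
        # Auxiliary acoustic event detection head on the visual features.
        self.event_head = nn.Linear(256, n_events)

    def forward(self, noisy_mag, visual_feat):
        # noisy_mag: (batch, time, n_freq) magnitude spectrogram
        # visual_feat: (batch, time, visual_dim) audio-aligned video features
        a, _ = self.audio_enc(noisy_mag)
        base_mask = torch.sigmoid(self.audio_mask(a))
        v = torch.relu(self.visual_proj(visual_feat))
        # Additive correction: visual evidence adjusts the audio-only mask.
        mask = torch.clamp(base_mask + self.visual_correction(v), 0.0, 1.0)
        enhanced = mask * noisy_mag
        event_logits = self.event_head(v.mean(dim=1))  # clip-level AED
        return enhanced, event_logits


def multitask_loss(enhanced, clean_mag, event_logits, event_labels, alpha=0.1):
    # Joint objective: enhancement loss plus a weighted event-detection loss,
    # encouraging visual features to stay discriminative across noise types.
    se_loss = nn.functional.mse_loss(enhanced, clean_mag)
    aed_loss = nn.functional.binary_cross_entropy_with_logits(
        event_logits, event_labels)
    return se_loss + alpha * aed_loss
```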