Paper Title
End-to-End Multi-Person Audio/Visual Automatic Speech Recognition
Paper Authors
Paper Abstract
Traditionally, audio-visual automatic speech recognition has been studied under the assumption that the speaking face in the visual signal is the face matching the audio. However, in a more realistic setting, when multiple faces are potentially on screen, one needs to decide which face to feed to the A/V ASR system. The present work takes the recent progress of A/V ASR one step further and considers the scenario where multiple people are simultaneously on screen (multi-person A/V ASR). We propose a fully differentiable A/V ASR model that is able to handle multiple face tracks in a video. Instead of relying on two separate models, one for speaker face selection and one for audio-visual ASR on a single face track, we introduce an attention layer into the ASR encoder that soft-selects the appropriate face video track. Experiments carried out on an A/V system trained on over 30k hours of YouTube videos show that the proposed approach can automatically select the proper face tracks with only minor WER degradation compared to an oracle selection of the speaking face, while still demonstrating the benefit of using the visual signal rather than the audio alone.
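The abstract does not specify how the soft-selection attention layer is implemented. Below is a minimal, hypothetical sketch of one plausible realization: per-frame attention where projected audio features act as the query and per-track visual features act as the keys, producing a convex combination of the face tracks. All names, dimensions, and the audio-as-query design are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn


class FaceTrackAttention(nn.Module):
    """Hypothetical sketch: soft-select among multiple face-track visual
    features using the audio features as the attention query."""

    def __init__(self, audio_dim: int, visual_dim: int, attn_dim: int = 128):
        super().__init__()
        self.query_proj = nn.Linear(audio_dim, attn_dim)  # assumed projection
        self.key_proj = nn.Linear(visual_dim, attn_dim)   # assumed projection
        self.scale = attn_dim ** 0.5

    def forward(self, audio_feats: torch.Tensor, visual_feats: torch.Tensor):
        # audio_feats:  (batch, time, audio_dim)
        # visual_feats: (batch, num_tracks, time, visual_dim)
        q = self.query_proj(audio_feats)              # (B, T, A)
        k = self.key_proj(visual_feats)               # (B, N, T, A)
        # Per-frame compatibility score between the audio and each face track.
        scores = torch.einsum("bta,bnta->btn", q, k) / self.scale  # (B, T, N)
        weights = torch.softmax(scores, dim=-1)       # soft track selection
        # Convex combination of the track features at each frame; the whole
        # operation is differentiable, so it trains end-to-end with the ASR loss.
        fused = torch.einsum("btn,bntv->btv", weights, visual_feats)
        return fused, weights                         # (B, T, V), (B, T, N)


# Usage sketch: fuse 3 candidate face tracks into one visual stream that a
# downstream A/V ASR encoder could consume alongside the audio features.
attn = FaceTrackAttention(audio_dim=256, visual_dim=512)
audio = torch.randn(2, 100, 256)        # 2 utterances, 100 frames
tracks = torch.randn(2, 3, 100, 512)    # 3 face tracks per utterance
fused_visual, track_weights = attn(audio, tracks)
```

Because the selection is a softmax-weighted average rather than a hard argmax, gradients flow through the track weights, which is consistent with the paper's claim of a fully differentiable model replacing a separate face-selection stage.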