Paper Title
End-to-End Lip Synchronisation Based on Pattern Classification
Paper Authors
Paper Abstract
The goal of this work is to synchronise the audio and video of a talking face using deep neural network models. Existing works have trained networks on proxy tasks such as cross-modal similarity learning, and then computed similarities between audio and video frames using a sliding-window approach. While these methods demonstrate satisfactory performance, the networks are not trained directly on the task. To this end, we propose an end-to-end trained network that directly predicts the offset between an audio stream and the corresponding video stream. The similarity matrix between the two modalities is first computed from the features; the inference of the offset is then treated as a pattern recognition problem in which the matrix is regarded as an image. The feature extractor and the classifier are trained jointly. We demonstrate that the proposed approach outperforms previous work by a large margin on the LRS2 and LRS3 datasets.
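The abstract describes a three-stage pipeline: per-frame audio and video features, a cross-modal similarity matrix built from them, and a classifier that reads the matrix as an image to predict the offset, all trained jointly. The PyTorch sketch below illustrates that idea only; the encoder architectures, feature dimensions, and the 31-way offset range are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetClassifier(nn.Module):
    """Minimal sketch of the described pipeline: frame-level encoders ->
    cosine-similarity matrix -> CNN classifier over the matrix, which
    predicts the audio-video offset as a class label."""

    def __init__(self, feat_dim=512, num_offsets=31):
        super().__init__()
        # Placeholder per-frame encoders (assumptions). The paper would use
        # dedicated audio/visual front-ends rather than single linear layers.
        self.audio_enc = nn.Linear(40, feat_dim)    # e.g. 40-dim MFCC frame
        self.video_enc = nn.Linear(4096, feat_dim)  # e.g. flattened lip-crop feature
        # Small CNN that treats the similarity matrix as a 1-channel image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.fc = nn.Linear(64 * 8 * 8, num_offsets)  # one class per offset

    def forward(self, audio, video):
        # audio: (B, Ta, 40), video: (B, Tv, 4096)
        a = F.normalize(self.audio_enc(audio), dim=-1)  # (B, Ta, D)
        v = F.normalize(self.video_enc(video), dim=-1)  # (B, Tv, D)
        # Cosine similarity between every audio frame and every video frame.
        sim = torch.bmm(a, v.transpose(1, 2))           # (B, Ta, Tv)
        x = self.cnn(sim.unsqueeze(1))                  # matrix as an image
        return self.fc(x.flatten(1))                    # offset logits

# Joint end-to-end training: the encoders and the classifier receive
# gradients from a single cross-entropy loss on the true offset class.
model = OffsetClassifier()
audio = torch.randn(2, 50, 40)
video = torch.randn(2, 50, 4096)
target = torch.randint(0, 31, (2,))  # ground-truth offset index per sample
loss = F.cross_entropy(model(audio, video), target)
loss.backward()
```

This contrasts with the sliding-window baseline the abstract mentions, where similarities would be scored by a separately trained network and the offset picked afterwards; here the offset prediction itself supplies the training signal.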