Paper Title

MFCCA: Multi-Frame Cross-Channel Attention for Multi-Speaker ASR in Multi-Party Meeting Scenario

Authors

Fan Yu, Shiliang Zhang, Pengcheng Guo, Yuhao Liang, Zhihao Du, Yuxiao Lin, Lei Xie

Abstract


Recently, cross-channel attention, which better leverages multi-channel signals from a microphone array, has shown promising results in the multi-party meeting scenario. Cross-channel attention focuses on either learning global correlations between sequences of different channels or exploiting fine-grained channel-wise information effectively at each time step. Considering the delay of the microphone array in receiving sound, we propose a multi-frame cross-channel attention, which models cross-channel information between adjacent frames to exploit the complementarity of both frame-wise and channel-wise knowledge. Besides, we also propose a multi-layer convolutional mechanism to fuse the multi-channel output and a channel masking strategy to combat the channel-number mismatch problem between training and inference. Experiments on AliMeeting, a real-world corpus, reveal that our proposed model outperforms the single-channel model by 31.7% and 37.0% CER reduction on the Eval and Test sets. Moreover, with comparable model parameters and training data, our proposed model achieves a new SOTA performance on the AliMeeting corpus, as compared with the top-ranking systems in the ICASSP 2022 M2MeT challenge, a recently held multi-channel multi-speaker ASR challenge.
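To make the two key ideas in the abstract concrete, here is a minimal numpy sketch: one toy function for attention that, for each frame, attends over the channel features of adjacent frames (the "multi-frame cross-channel" idea), and one for randomly masking channels during training to tolerate a different channel count at inference. All function names, the context width `k`, and the use of a channel-mean query are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_frame_cross_channel_attention(x, k=1):
    """Toy sketch (not the paper's exact layer).

    x: array of shape (T, C, D) — T frames, C channels, D features.
    For each frame t, a query (here: the mean over channels) attends over
    the channel features of frames t-k..t+k, so the attention context mixes
    channel-wise and adjacent-frame information, as the abstract describes.
    """
    T, C, D = x.shape
    out = np.zeros((T, D))
    for t in range(T):
        lo, hi = max(0, t - k), min(T, t + k + 1)
        ctx = x[lo:hi].reshape(-1, D)       # ((hi-lo)*C, D) keys/values
        q = x[t].mean(axis=0)               # (D,) query for frame t
        w = softmax(ctx @ q / np.sqrt(D))   # scaled dot-product weights
        out[t] = w @ ctx                    # weighted sum over frames x channels
    return out

def channel_mask(x, keep, rng):
    """Toy channel-masking: zero out all but `keep` randomly chosen channels,
    so training sees varying effective channel counts."""
    T, C, D = x.shape
    idx = rng.choice(C, size=keep, replace=False)
    mask = np.zeros((1, C, 1))
    mask[0, idx, 0] = 1.0
    return x * mask
```

A plausible training loop would apply `channel_mask` per utterance before the attention layer; at inference all available channels are passed through unmasked.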
