Paper Title

SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder Based Speech-Text Pre-training

Paper Authors

Ziqiang Zhang, Long Zhou, Junyi Ao, Shujie Liu, Lirong Dai, Jinyu Li, Furu Wei

Paper Abstract

The rapid development of single-modal pre-training has prompted researchers to pay more attention to cross-modal pre-training methods. In this paper, we propose a unified-modal speech-unit-text pre-training model, SpeechUT, to connect the representations of a speech encoder and a text decoder with a shared unit encoder. Leveraging hidden-unit as an interface to align speech and text, we can decompose the speech-to-text model into a speech-to-unit model and a unit-to-text model, which can be jointly pre-trained with unpaired speech and text data respectively. Our proposed SpeechUT is fine-tuned and evaluated on automatic speech recognition (ASR) and speech translation (ST) tasks. Experimental results show that SpeechUT gets substantial improvements over strong baselines, and achieves state-of-the-art performance on both the LibriSpeech ASR and MuST-C ST tasks. To better understand the proposed SpeechUT, detailed analyses are conducted. The code and pre-trained models are available at https://aka.ms/SpeechUT.
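To make the decomposition described above concrete, here is a minimal structural sketch of the three-module composition: a speech encoder feeds a shared unit encoder, whose output a text decoder cross-attends to, with discrete hidden units serving as the interface between the speech-to-unit and unit-to-text halves. All module names, layer counts, and dimensions below are illustrative assumptions, not the authors' implementation; the released code at https://aka.ms/SpeechUT is the authoritative reference.

```python
# Hypothetical sketch of the SpeechUT-style decomposition (PyTorch).
# Not the official model; shapes and hyperparameters are placeholders.
import torch
import torch.nn as nn

class SpeechUTSketch(nn.Module):
    def __init__(self, dim=768, n_units=1000, vocab=10000):
        super().__init__()
        # Speech-to-unit half: pre-trainable on unpaired speech.
        self.speech_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True),
            num_layers=6,
        )
        # During pre-training, the speech side predicts discrete hidden units.
        self.unit_head = nn.Linear(dim, n_units)
        # Shared unit encoder: the hidden-unit interface aligning both modalities.
        self.unit_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=12, batch_first=True),
            num_layers=6,
        )
        # Unit-to-text half: pre-trainable on unpaired text (text -> units -> text).
        self.text_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=12, batch_first=True),
            num_layers=6,
        )
        self.output_proj = nn.Linear(dim, vocab)

    def forward(self, speech_features, text_embeddings):
        # speech_features: (batch, frames, dim); text_embeddings: (batch, tokens, dim)
        h = self.speech_encoder(speech_features)   # speech -> unit-space representations
        unit_logits = self.unit_head(h)            # speech-to-unit prediction target
        m = self.unit_encoder(h)                   # shared unit encoding
        d = self.text_decoder(text_embeddings, m)  # decoder cross-attends to unit memory
        return unit_logits, self.output_proj(d)    # unit logits + text token logits

# Usage sketch: joint composition as fine-tuned for ASR/ST.
model = SpeechUTSketch()
speech = torch.randn(2, 100, 768)
text = torch.randn(2, 20, 768)
unit_logits, text_logits = model(speech, text)
```

The point of the decomposition is visible in the structure: the speech encoder and the unit-encoder-plus-decoder stack meet only at the unit interface, so each half can be pre-trained on its own unpaired data before the whole pipeline is fine-tuned end-to-end.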
