Paper Title
Transformer-based language modeling and decoding for conversational speech recognition
Paper Authors
Paper Abstract
We propose a way to use a transformer-based language model in conversational speech recognition. Specifically, we focus on decoding efficiently in a weighted finite-state transducer framework. We showcase an approach to lattice re-scoring that allows for the longer-range history captured by a transformer-based language model and takes advantage of a transformer's ability to avoid sequential computation.
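As a rough illustration of the kind of re-scoring the abstract describes (not the authors' implementation), the sketch below uses a Hugging Face GPT-2 model as a stand-in transformer language model and re-scores an N-best list of first-pass hypotheses in a single batched forward pass, rather than token by token. The N-best simplification of full lattice re-scoring, the model choice, the `lm_weight` interpolation factor, and the `(score, text)` hypothesis format are all assumptions made for the example, not details from the paper.

```python
# Minimal sketch: batched transformer-LM re-scoring of first-pass hypotheses.
# GPT-2 stands in for the paper's transformer LM; N-best re-scoring stands in
# for full lattice re-scoring.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token            # GPT-2 has no pad token
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

@torch.no_grad()
def lm_scores(hypotheses):
    """Return one LM log-probability per hypothesis, computed in a single
    batched forward pass (the non-sequential computation the abstract notes)."""
    # Prepend EOS so the first real token is also conditioned on a context.
    batch = tokenizer([tokenizer.eos_token + h for h in hypotheses],
                      return_tensors="pt", padding=True).to(device)
    logits = model(**batch).logits                    # (B, T, vocab)
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = batch["input_ids"][:, 1:]               # next-token targets
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    mask = batch["attention_mask"][:, 1:].float()     # drop padded positions
    return (token_lp * mask).sum(dim=-1)              # (B,)

def rescore(nbest, lm_weight=0.5):
    """nbest: list of (first_pass_score, text); higher score is better.
    lm_weight is an illustrative interpolation factor."""
    lm = lm_scores([text for _, text in nbest])
    combined = [(score + lm_weight * lp.item(), text)
                for (score, text), lp in zip(nbest, lm)]
    return max(combined, key=lambda x: x[0])
```

Because every hypothesis (and every token position within it) is scored in one forward pass, this avoids the per-token recurrence an RNN language model would require, which is the efficiency property the abstract highlights.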