Title
On the Prediction Network Architecture in RNN-T for ASR
Authors
Abstract
RNN-T models have gained popularity in the literature and in commercial systems because of their competitiveness and their ability to operate in online streaming mode. In this work, we conduct an extensive study comparing several prediction network architectures for both monotonic and original RNN-T models. We compare four types of prediction networks based on a common state-of-the-art Conformer encoder, and report results obtained on LibriSpeech and an internal medical conversation data set. Our study covers both offline batch-mode and online streaming scenarios. In contrast to some previous works, our results show that Transformer does not always outperform LSTM when used as the prediction network along with a Conformer encoder. Inspired by our scoreboard, we propose a new simple prediction network architecture, N-Concat, that outperforms the others in our online streaming benchmark. The Transformer and reduced n-gram architectures perform very similarly, yet with some importantly distinct behaviour with respect to previous context. Overall, we obtained up to 4.1% relative WER improvement compared to our LSTM baseline, while reducing prediction network parameters by nearly an order of magnitude (8.4 times).
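The abstract does not define the N-Concat architecture. A plausible reading, given the name and its kinship to reduced n-gram ("stateless") prediction networks, is that it embeds the last N non-blank labels and concatenates their embeddings before projecting to the joint dimension. The PyTorch sketch below illustrates only that reading; the class name, dimensions, padding scheme, and projection layer are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NConcatPredictionNetwork(nn.Module):
    """Hypothetical stateless prediction network: concatenate the
    embeddings of the previous N labels (an assumed reading of
    "N-Concat"; not taken from the paper)."""

    def __init__(self, vocab_size, embed_dim=128, context_size=2,
                 output_dim=512, blank_id=0):
        super().__init__()
        self.context_size = context_size
        self.blank_id = blank_id
        # Blank doubles as the padding symbol for positions with no history.
        self.embedding = nn.Embedding(vocab_size, embed_dim,
                                      padding_idx=blank_id)
        self.proj = nn.Linear(context_size * embed_dim, output_dim)

    def forward(self, labels):
        # labels: (batch, U) previously emitted non-blank label ids
        batch, _ = labels.shape
        # Left-pad so every position sees exactly `context_size` labels.
        pad = labels.new_full((batch, self.context_size - 1), self.blank_id)
        padded = torch.cat([pad, labels], dim=1)          # (B, U + N - 1)
        windows = padded.unfold(1, self.context_size, 1)  # (B, U, N)
        emb = self.embedding(windows)                     # (B, U, N, E)
        return self.proj(emb.flatten(2))                  # (B, U, output_dim)

# Usage: per-step prediction-network outputs for a batch of label histories.
net = NConcatPredictionNetwork(vocab_size=1000)
out = net(torch.randint(1, 1000, (4, 10)))  # -> torch.Size([4, 10, 512])
```

A network of this shape is stateless: each output depends only on a fixed window of N previous labels rather than on a recurrent state, which is what makes such architectures far smaller than an LSTM and is consistent with the roughly 8.4-fold parameter reduction the abstract reports.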