Paper Title

Bangla-Wave: Improving Bangla Automatic Speech Recognition Utilizing N-gram Language Models

Paper Authors

Mohammed Rakib, Md. Ismail Hossain, Nabeel Mohammed, Fuad Rahman

Paper Abstract

Although over 300M people around the world speak Bangla, scant work has been done on improving Bangla voice-to-text transcription because Bangla is a low-resource language. However, with the introduction of the Bengali Common Voice 9.0 speech dataset, Automatic Speech Recognition (ASR) models can now be significantly improved. With 399 hours of speech recordings, Bengali Common Voice is the largest and most diversified open-source Bengali speech corpus in the world. In this paper, we outperform the SOTA pretrained Bengali ASR models by finetuning a pretrained wav2vec2 model on the Common Voice dataset. We also demonstrate how to significantly improve the performance of an ASR model by adding an n-gram language model as a post-processor. Finally, we conduct experiments and hyperparameter tuning to produce a robust Bangla ASR model that outperforms existing ASR models.
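
As a rough illustration of the pipeline the abstract describes, the sketch below (not the authors' released code) combines a finetuned wav2vec2 checkpoint with a KenLM n-gram language model used as a CTC post-processor, via the HuggingFace transformers library and pyctcdecode. The checkpoint path, audio file, and .arpa language-model file are placeholders, and alpha/beta are the usual LM-weight hyperparameters one would tune.

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from pyctcdecode import build_ctcdecoder

# Placeholder paths: substitute a real finetuned checkpoint and a KenLM .arpa file.
CHECKPOINT = "path/to/finetuned-bangla-wav2vec2"
LM_PATH = "path/to/bangla_5gram.arpa"

processor = Wav2Vec2Processor.from_pretrained(CHECKPOINT)
model = Wav2Vec2ForCTC.from_pretrained(CHECKPOINT).eval()

# Vocabulary in index order, with the CTC word-delimiter token mapped to a space,
# since pyctcdecode expects character labels with " " marking word boundaries.
vocab = processor.tokenizer.get_vocab()
word_delim = processor.tokenizer.word_delimiter_token
labels = [" " if tok == word_delim else tok
          for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

# Beam-search decoder whose hypotheses are rescored by the n-gram LM;
# alpha (LM weight) and beta (word-insertion bonus) are tunable hyperparameters.
decoder = build_ctcdecoder(labels, kenlm_model_path=LM_PATH, alpha=0.5, beta=1.5)

# Load a 16 kHz mono recording and compute frame-level log-probabilities.
speech, _ = librosa.load("sample_bangla.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits[0]
log_probs = torch.log_softmax(logits, dim=-1).cpu().numpy()

greedy_text = processor.decode(log_probs.argmax(axis=-1))  # acoustic model alone
lm_text = decoder.decode(log_probs)                        # with n-gram post-processing
print("greedy :", greedy_text)
print("n-gram :", lm_text)
```

In this setup the n-gram model only rescores beam-search hypotheses at decode time, so it can be swapped out or retuned without retraining the acoustic model.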
