Paper Title

Improvements to Embedding-Matching Acoustic-to-Word ASR Using Multiple-Hypothesis Pronunciation-Based Embeddings

Authors

Hao Yen, Woojay Jeon

Abstract

In embedding-matching acoustic-to-word (A2W) ASR, every word in the vocabulary is represented by a fixed-dimension embedding vector that can be added or removed independently of the rest of the system. The approach is potentially an elegant solution to the dynamic out-of-vocabulary (OOV) word problem, where speaker- and context-dependent named entities like contact names must be incorporated into the ASR on the fly for every speech utterance at test time. Challenges still remain, however, in improving the overall accuracy of embedding-matching A2W. In this paper, we contribute two methods that improve the accuracy of embedding-matching A2W. First, we propose internally producing multiple embeddings, instead of a single embedding, at each instance in time, which allows the A2W model to propose a richer set of hypotheses over multiple time segments in the audio. Second, we propose using word pronunciation embeddings rather than word orthography embeddings to reduce ambiguities introduced by words that have more than one pronunciation. We show that the above ideas give significant accuracy improvements, with the same training data and nearly identical model size, in scenarios where dynamic OOV words play a crucial role. On a dataset of queries to a speech-based digital assistant that includes many user-dependent contact names, we observe up to an 18% decrease in word error rate using the proposed improvements.
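The core mechanism described in the abstract can be sketched roughly as follows. This is a hypothetical, minimal illustration (the vocabulary, vector dimensions, and function names are assumptions for illustration, not the paper's actual model): each vocabulary word carries a fixed-dimension embedding that can be added or removed independently, and at each time step the acoustic model emits several candidate embeddings (rather than one), each matched against the vocabulary by cosine similarity.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical vocabulary of pronunciation-based word embeddings.
# New words (e.g. a user's contact names) can be added to this table
# at test time without retraining the rest of the system.
vocab = {
    "call":  [0.9, 0.1, 0.0],
    "alice": [0.1, 0.8, 0.2],
    "alex":  [0.2, 0.7, 0.3],
}

def match_hypotheses(acoustic_embeddings, top_n=2):
    """For each of the K embeddings the model emits at one time step,
    return the top-N vocabulary words ranked by cosine similarity."""
    results = []
    for emb in acoustic_embeddings:
        scored = sorted(vocab.items(),
                        key=lambda kv: cosine(emb, kv[1]),
                        reverse=True)
        results.append([word for word, _ in scored[:top_n]])
    return results

# Two candidate embeddings emitted at the same time step (K = 2),
# allowing competing word hypotheses over the same audio segment.
hyps = match_hypotheses([[0.1, 0.8, 0.2], [0.85, 0.2, 0.05]])
```

With multiple embeddings per time step, the downstream decoder can keep both "alice" and "call" alive as hypotheses for overlapping segments instead of committing to a single match, which is the intuition behind the paper's first contribution.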
