Paper Title
Nonwords Pronunciation Classification in Language Development Tests for Preschool Children
Paper Authors
Paper Abstract
This work aims to automatically evaluate whether the language development of children is age-appropriate. Validated speech and language tests are used for this purpose to test auditory memory. In this work, the task is to determine whether spoken nonwords have been uttered correctly. We compare different approaches motivated by the modeling of specific language structures: low-level features (FFT), speaker embeddings (ECAPA-TDNN), grapheme-motivated embeddings (wav2vec 2.0), and phonetic embeddings in the form of senones (ASR acoustic model). Each of these approaches provides input for a VGG-like 5-layer CNN classifier. We also examine adaptation per nonword. The proposed systems were evaluated using recordings of spoken nonwords collected at different kindergartens. ECAPA-TDNN and low-level FFT features do not explicitly model phonetic information; wav2vec 2.0 is trained on grapheme labels, while our ASR acoustic model features contain (sub-)phonetic information. We found that the more granular the phonetic modeling, the higher the achieved recognition rates. The best system, trained on ASR acoustic model features with VTLN, achieved an accuracy of 89.4% and an area under the ROC (Receiver Operating Characteristic) curve (AUC) of 0.923. This corresponds to relative improvements of 20.2% in accuracy and 0.309 in AUC compared to the FFT baseline.
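The abstract describes feeding each type of acoustic feature into a VGG-like 5-layer CNN that decides whether a nonword was pronounced correctly. The following is a minimal PyTorch sketch of such a classifier, not the authors' implementation: the channel progression, pooling scheme, and input shape (e.g., an FFT spectrogram or an embedding sequence arranged as a 2D feature map) are assumptions made for illustration.

```python
# Minimal sketch of a VGG-like 5-layer CNN binary classifier, assuming a
# (batch, 1, feature_dim, time) input such as an FFT spectrogram or a
# stacked embedding sequence. Channel sizes and pooling are assumptions.
import torch
import torch.nn as nn


class VGGLikeCNN(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        channels = [32, 64, 128, 128, 256]  # assumed channel progression
        blocks, prev = [], in_channels
        for ch in channels:  # five conv blocks in VGG style: conv -> BN -> ReLU -> pool
            blocks += [
                nn.Conv2d(prev, ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            ]
            prev = ch
        self.features = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveAvgPool2d(1)  # allows variable-length utterances
        self.classifier = nn.Linear(prev, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, feature_dim, time) feature map
        h = self.pool(self.features(x)).flatten(1)
        return self.classifier(h)  # logits for correct vs. incorrect pronunciation


if __name__ == "__main__":
    model = VGGLikeCNN()
    dummy = torch.randn(4, 1, 80, 200)  # e.g., 80 frequency bins x 200 frames
    print(model(dummy).shape)  # torch.Size([4, 2])
```

The softmax scores of such a classifier can then be used to compute the accuracy and ROC-AUC reported in the abstract, for example with `sklearn.metrics.roc_auc_score` on a held-out set of kindergarten recordings.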