Paper Title
A Transfer Learning Method for Speech Emotion Recognition from Automatic Speech Recognition
Paper Authors
Paper Abstract
This paper presents a transfer learning method for speech emotion recognition based on a Time-Delay Neural Network (TDNN) architecture. A major challenge in current speech-based emotion detection research is data scarcity. The proposed method addresses this problem by applying transfer learning techniques to leverage data from the automatic speech recognition (ASR) task, for which ample data is available. Our experiments also show the advantage of speaker-level adaptation modeling techniques by adopting identity-vector (i-vector) based features in addition to standard Mel-Frequency Cepstral Coefficient (MFCC) features.[1] We show that the transfer learning models significantly outperform models that are not pretrained on ASR. The experiments were performed on the publicly available IEMOCAP dataset, which provides 12 hours of emotional speech data. The transfer learning was initialized using the TED-LIUM v2 speech dataset, which provides 207 hours of audio with corresponding transcripts. We achieve significantly higher accuracy than the state of the art under five-fold cross-validation. Using only speech, we obtain an accuracy of 71.7% for the anger, excitement, sadness, and neutral emotion classes.
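To make the recipe in the abstract concrete, below is a minimal sketch in PyTorch of a TDNN-based transfer learning pipeline of this kind. All specifics here are illustrative assumptions, not the paper's exact configuration: the layer sizes, the 40-dim MFCC and 100-dim i-vector features, the 4000-unit ASR output, and the mean-pooling classifier are stand-ins (the authors may well have used a Kaldi-style setup instead).

```python
# Illustrative sketch only: pretrain a TDNN encoder on an ASR objective,
# then swap the output layer for a 4-class emotion classifier and fine-tune.
import torch
import torch.nn as nn

class TDNN(nn.Module):
    """Time-Delay Neural Network realized as dilated 1-D convolutions."""
    def __init__(self, in_dim: int, hidden: int = 512):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=5, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, dilation=3), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim, frames) -> (batch, hidden, frames')
        return self.layers(x)

# Assumed input: 40-dim MFCCs concatenated per frame with a 100-dim i-vector.
feat_dim = 40 + 100
encoder = TDNN(feat_dim)

# Step 1 (conceptual): pretrain `encoder` with an ASR head on TED-LIUM v2,
# e.g., predicting frame-level acoustic targets. 4000 targets is an assumption.
asr_head = nn.Conv1d(512, 4000, kernel_size=1)

# Step 2: discard the ASR head, attach a 4-class emotion classifier
# (anger, excitement, sadness, neutral), and fine-tune on IEMOCAP.
emotion_head = nn.Linear(512, 4)

def classify_utterance(feats: torch.Tensor) -> torch.Tensor:
    """feats: (batch, feat_dim, frames) -> (batch, 4) emotion logits."""
    h = encoder(feats)       # frame-level representations
    pooled = h.mean(dim=2)   # average pooling over time -> utterance vector
    return emotion_head(pooled)
```

The design point the abstract makes is visible in the structure: the encoder weights, learned from 207 hours of transcribed ASR audio, initialize the emotion model, so only 12 hours of labeled emotional speech are needed to fine-tune the much smaller classification head.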