Paper Title
iEmoTTS: Toward Robust Cross-Speaker Emotion Transfer and Control for Speech Synthesis based on Disentanglement between Prosody and Timbre
Paper Authors
Paper Abstract
The capability of generating speech with a specific type of emotion is desired for many applications of human-computer interaction. Cross-speaker emotion transfer is a common approach to generating emotional speech when speech with emotion labels from target speakers is not available for model training. This paper presents a novel cross-speaker emotion transfer system, named iEmoTTS. The system is composed of an emotion encoder, a prosody predictor, and a timbre encoder. The emotion encoder extracts the identity of the emotion type as well as the respective emotion intensity from the mel-spectrogram of input speech. The emotion intensity is measured by the posterior probability that the input utterance carries that emotion. The prosody predictor is used to provide prosodic features for emotion transfer. The timbre encoder provides timbre-related information for the system. Unlike many other studies which focus on disentangling speaker and style factors of speech, iEmoTTS is designed to achieve cross-speaker emotion transfer via disentanglement between prosody and timbre. Prosody is considered the main carrier of emotion-related speech characteristics, while timbre accounts for the essential characteristics for speaker identification. Zero-shot emotion transfer, meaning that speech of target speakers is not seen in model training, is also realized with iEmoTTS. Extensive experiments of subjective evaluation have been carried out. The results demonstrate the effectiveness of iEmoTTS as compared with other recently proposed systems of cross-speaker emotion transfer. It is shown that iEmoTTS can produce speech with a designated emotion type and controllable emotion intensity. With appropriate information bottleneck capacity, iEmoTTS is able to effectively transfer emotion information to a new speaker. Audio samples are publicly available at https://patrick-g-zhang.github.io/iemotts/
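The abstract's definition of emotion intensity — the posterior probability that an utterance carries a given emotion — can be illustrated with a minimal sketch. Everything below (the emotion label set, the `emotion_intensity` helper, the toy logits) is an illustrative assumption, not the authors' implementation; in iEmoTTS the logits would come from the emotion encoder applied to a mel-spectrogram.

```python
import numpy as np

# Hypothetical emotion label set for illustration only.
EMOTIONS = ["neutral", "happy", "sad", "angry"]

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def emotion_posterior(logits):
    """Map classifier logits over emotion classes to a posterior distribution."""
    return dict(zip(EMOTIONS, softmax(np.asarray(logits, dtype=float))))

def emotion_intensity(logits, emotion):
    """Intensity of `emotion` = posterior probability assigned to that class."""
    return emotion_posterior(logits)[emotion]

# Toy logits standing in for an emotion classifier's output on one utterance.
logits = [0.2, 2.5, -1.0, 0.3]
print(round(emotion_intensity(logits, "happy"), 3))  # prints 0.806
```

Treating the posterior as a scalar in [0, 1] is what makes the intensity controllable at synthesis time: the same emotion type can be conditioned on a larger or smaller intensity value.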