Paper Title
Accent and Speaker Disentanglement in Many-to-many Voice Conversion
Authors
Abstract
This paper proposes an interesting joint voice and accent conversion approach, which can convert an arbitrary source speaker's voice to that of a target speaker with a non-native accent. This problem is challenging because each target speaker only has training data in a native accent; we need to disentangle accent and speaker information during conversion model training and re-combine them at the conversion stage. In our recognition-synthesis conversion framework, we manage to solve this problem with two proposed tricks. First, we use accent-dependent speech recognizers to obtain bottleneck (BN) features for speakers with different accents. This aims to wipe out factors other than the linguistic information in the BN features for conversion model training. Second, we propose to use adversarial training to better disentangle the speaker and accent information in our encoder-decoder based conversion model. Specifically, we plug an auxiliary speaker classifier into the encoder, trained with an adversarial loss to erase speaker information from the encoder output. Experiments show that our approach is superior to the baseline. The proposed tricks are quite effective in improving accentedness and audio quality, and speaker similarity is well maintained.
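The adversarial speaker-erasure idea in the abstract is commonly realized with a gradient reversal layer: the auxiliary classifier's loss is minimized with respect to the classifier itself, but its gradient is sign-flipped before it reaches the encoder, so the encoder learns features from which speaker identity is hard to predict. Below is a minimal, framework-free sketch of that mechanism; the names `GradReverse`, `lam`, and `encoder_update` are illustrative assumptions, not identifiers from the paper.

```python
class GradReverse:
    """Gradient reversal layer: identity in the forward pass; in the
    backward pass the incoming gradient is negated and scaled by lam,
    so the encoder ascends (rather than descends) the classifier loss."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off weight for the adversarial term

    def forward(self, features):
        # Features pass through unchanged to the speaker classifier.
        return features

    def backward(self, grad_from_classifier):
        # Sign-flip the gradient flowing back into the encoder,
        # pushing it to erase speaker information.
        return [-self.lam * g for g in grad_from_classifier]


def encoder_update(params, grad_recon, grad_classifier, grl, lr=0.1):
    """Toy SGD step combining the reconstruction gradient with the
    reversed classifier gradient (per-parameter lists of floats)."""
    grad_adv = grl.backward(grad_classifier)
    return [p - lr * (gr + ga) for p, gr, ga in zip(params, grad_recon, grad_adv)]
```

In a real system the same sign flip is usually implemented inside the autograd engine (e.g. a custom backward function), so the rest of the training loop is a plain joint minimization.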