Paper Title
Face-to-Music Translation Using a Distance-Preserving Generative Adversarial Network with an Auxiliary Discriminator
Paper Authors
Paper Abstract
Learning a mapping between two unrelated domains, such as image and audio, without any supervision is a challenging task. In this work, we propose a distance-preserving generative adversarial model to translate images of human faces into an audio domain. The audio domain is defined by a collection of musical note sounds recorded by 10 different instrument families (NSynth \cite{nsynth2017}) and a distance metric in which the instrument family class information is incorporated together with mel-frequency cepstral coefficient (MFCC) features. To enforce distance preservation, we use a loss term that penalizes the difference between the pairwise distances of the faces and those of the translated audio samples. Further, we discover that the distance-preservation constraint in the generative adversarial model leads to reduced diversity in the translated audio samples, and propose the use of an auxiliary discriminator to enhance the diversity of the translations while retaining the distance-preservation constraint. We also provide a visual demonstration of the results and a numerical analysis of the fidelity of the translations. A video demo of our proposed model's learned translation is available at https://www.dropbox.com/s/the176w9obq8465/face_to_musical_note.mov?dl=0.
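The pairwise distance-preservation loss described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the face and audio feature extractors, the metric that incorporates instrument-family labels alongside MFCCs, and any normalization are assumptions here.

```python
import numpy as np

def pairwise_distances(x):
    # Euclidean distance between every pair of rows in x; returns an (n, n) matrix.
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def distance_preservation_loss(face_feats, audio_feats):
    # Penalize the discrepancy between the pairwise distances of a batch of
    # face features and the pairwise distances of their translated audio
    # features (hypothetical feature spaces for illustration).
    d_face = pairwise_distances(face_feats)
    d_audio = pairwise_distances(audio_feats)
    return ((d_face - d_audio) ** 2).mean()
```

In training, a term like this would be added to the adversarial objective so that faces that are far apart map to audio samples that are correspondingly far apart under the chosen audio metric.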