Paper Title

Adaptive re-calibration of channel-wise features for Adversarial Audio Classification

Paper Authors

Vardhan Dongre, Abhinav Thimma Reddy, Nikhitha Reddeddy

Paper Abstract

DeepFake audio, unlike DeepFake images and videos, has been relatively less explored from a detection perspective, and the solutions that exist for synthetic speech classification either use complex networks or don't generalize to the different varieties of synthetic speech obtained using different generative and optimization-based methods. Through this work, we propose a channel-wise recalibration of features using attentional feature fusion for synthetic speech detection and compare its performance against different detection methods, including End2End models and ResNet-based models, on synthetic speech generated using Text-to-Speech and vocoder systems such as WaveNet, WaveRNN, Tacotron, and WaveGlow. We also experiment with Squeeze-and-Excitation (SE) blocks in our ResNet models and find that the combination achieves better performance. In addition to this analysis, we demonstrate that fusing Linear Frequency Cepstral Coefficients (LFCC) and Mel Frequency Cepstral Coefficients (MFCC) with the attentional feature fusion technique creates better input feature representations, which can help even simpler models generalize well on synthetic speech classification tasks. Our models (ResNet-based, using feature fusion), trained on the Fake or Real (FoR) dataset, were able to achieve 95% test accuracy on the FoR data, and an average of 90% accuracy on samples we generated using different generative models after adapting this framework.
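
To make the two techniques named in the abstract concrete, below is a minimal PyTorch sketch of an SE-style channel recalibration block and a simple channel-wise attentional fusion of LFCC and MFCC feature maps. The module names, tensor shapes, reduction ratios, and the gating scheme are illustrative assumptions for this sketch, not the authors' released implementation.

    # Illustrative sketch (assumed shapes: batch, channels, freq, time).
    import torch
    import torch.nn as nn


    class SEBlock(nn.Module):
        """Squeeze-and-Excitation: re-weight channels from globally pooled statistics."""

        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            hidden = max(channels // reduction, 1)
            self.fc = nn.Sequential(
                nn.Linear(channels, hidden),
                nn.ReLU(inplace=True),
                nn.Linear(hidden, channels),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = x.mean(dim=(2, 3))            # squeeze: global average pool -> (b, c)
            w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel weights in (0, 1)
            return x * w                      # recalibrate channel-wise features


    class AttentionalFeatureFusion(nn.Module):
        """Fuse two feature maps with a learned channel-wise attention gate."""

        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            hidden = max(channels // reduction, 1)
            self.attn = nn.Sequential(
                nn.Conv2d(channels, hidden, kernel_size=1),
                nn.BatchNorm2d(hidden),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, channels, kernel_size=1),
                nn.BatchNorm2d(channels),
                nn.Sigmoid(),
            )

        def forward(self, lfcc: torch.Tensor, mfcc: torch.Tensor) -> torch.Tensor:
            gate = self.attn(lfcc + mfcc)             # attention weights from the summed maps
            return gate * lfcc + (1.0 - gate) * mfcc  # convex combination per channel


    # Toy usage with hypothetical sizes: fuse 1-channel LFCC/MFCC "images",
    # then recalibrate an intermediate ResNet feature map.
    if __name__ == "__main__":
        lfcc = torch.randn(8, 1, 60, 400)    # (batch, 1, coefficients, frames)
        mfcc = torch.randn(8, 1, 60, 400)
        fused = AttentionalFeatureFusion(channels=1)(lfcc, mfcc)
        feats = torch.randn(8, 64, 30, 200)  # e.g. output of an early ResNet stage
        recalibrated = SEBlock(channels=64)(feats)
        print(fused.shape, recalibrated.shape)

In this sketch the fused LFCC/MFCC map would serve as the input representation to the classifier, while the SE block would sit inside the ResNet stages to recalibrate channel responses; how the paper actually wires these pieces together is described in the full text, not here.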
