Paper Title


Deepfake Detection System for the ADD Challenge Track 3.2 Based on Score Fusion

Authors

Yuxiang Zhang, Jingze Lu, Xingming Wang, Zhuo Li, Runqiu Xiao, Wenchao Wang, Ming Li, Pengyuan Zhang

Abstract


This paper describes the deepfake audio detection system submitted to the Audio Deep Synthesis Detection (ADD) Challenge Track 3.2 and gives an analysis of score fusion. The proposed system is a score-level fusion of several light convolutional neural network (LCNN)-based models. Various front-ends are used as input features, including the low-frequency short-time Fourier transform and the constant-Q transform. Because of the complex noise and the wide range of synthesis algorithms, it is difficult to reach the desired performance by training on the training set alone; online data augmentation methods effectively improve the robustness of the fake audio detection systems. In particular, the reasons for the limited gain from score fusion are explored by visualizing the score distributions and comparing them with the score distributions on another dataset. Overfitting to the training set produces extreme score values and low correlation between the score distributions, which makes score fusion difficult. Fusion with a partially fake audio detection system further improves performance. The submission on Track 3.2 obtained a weighted equal error rate (WEER) of 11.04%, one of the best performing systems in the challenge.
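As a rough illustration of the score-level fusion described in the abstract, the sketch below fuses per-utterance scores from several subsystems by z-normalizing each subsystem's scores and taking a weighted average. The subsystem names, weights, and normalization choice are illustrative assumptions, not the authors' exact recipe.

```python
# Illustrative sketch (not the authors' code): score-level fusion of several
# fake-audio detection subsystems, assuming each subsystem has already
# produced one score per trial on a shared trial list.
import numpy as np

def normalize_scores(scores):
    """Z-normalize one subsystem's scores so that fusion is not dominated by
    a subsystem with an extreme score range (one symptom of the overfitting
    discussed in the abstract)."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.mean()) / (scores.std() + 1e-8)

def fuse_scores(subsystem_scores, weights=None):
    """Weighted average of normalized subsystem scores for each trial."""
    normalized = np.stack([normalize_scores(s) for s in subsystem_scores])
    if weights is None:
        weights = np.full(len(subsystem_scores), 1.0 / len(subsystem_scores))
    weights = np.asarray(weights, dtype=float)
    return weights @ normalized  # shape: (num_trials,)

# Hypothetical example: three LCNN subsystems (low-frequency STFT front-end,
# CQT front-end, and a partially fake audio detector) scoring four trials.
lf_stft_scores = [0.2, -1.3, 0.9, 2.1]
cqt_scores     = [0.1, -0.8, 1.2, 1.7]
partial_scores = [0.4, -0.5, 0.7, 1.0]
fused = fuse_scores([lf_stft_scores, cqt_scores, partial_scores],
                    weights=[0.4, 0.4, 0.2])
print(fused)
```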
