Paper Title
ASVspoof 2021: Towards Spoofed and Deepfake Speech Detection in the Wild
Paper Authors
Paper Abstract
Benchmarking initiatives support the meaningful comparison of competing solutions to prominent problems in speech and language processing. Successive benchmarking evaluations typically reflect a progressive evolution from ideal lab conditions towards those encountered in the wild. ASVspoof, the spoofing and deepfake detection initiative and challenge series, has followed the same trend. This article provides a summary of the ASVspoof 2021 challenge and the results of 54 participating teams that submitted to the evaluation phase. For the logical access (LA) task, results indicate that countermeasures are robust to newly introduced encoding and transmission effects. Results for the physical access (PA) task indicate the potential to detect replay attacks in real, as opposed to simulated, physical spaces, but a lack of robustness to variations between simulated and real acoustic environments. The deepfake (DF) task, new to the 2021 edition, targets solutions to the detection of manipulated, compressed speech data posted online. While detection solutions offer some resilience to compression effects, they lack generalization across different source datasets. In addition to a summary of the top-performing systems for each task and new analyses of influential data factors and results for hidden data subsets, the article includes a review of post-challenge results, an outline of the principal challenge limitations and a road-map for the future of ASVspoof.