Paper Title


Attentive activation function for improving end-to-end spoofing countermeasure systems

Paper Authors

Woo Hyun Kang, Jahangir Alam, Abderrahim Fathan

Paper Abstract


The main objective of a spoofing countermeasure system is to detect the artifacts within the input speech caused by the speech synthesis or voice conversion process. To achieve this, we propose adopting an attentive activation function, more specifically the attention rectified linear unit (AReLU), in an end-to-end spoofing countermeasure system. Since AReLU employs an attention mechanism to boost the contribution of relevant input features while suppressing irrelevant ones, introducing it can help the countermeasure system focus on the features related to the artifacts. The proposed framework was evaluated on the logical access (LA) task of the ASVspoof2019 dataset and outperformed systems using standard non-learnable activation functions.
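As a rough illustration of the idea described in the abstract, the sketch below implements an attention-based rectified linear unit with two learnable per-layer scalars: negative inputs are scaled by a clamped `alpha`, while positive inputs are amplified by a sigmoid-gated `beta`. This is only a minimal approximation of the general AReLU concept; the initial values and exact formulation are assumptions and may differ from the configuration used by the authors.

```python
import torch
import torch.nn as nn


class AReLU(nn.Module):
    """Minimal sketch of an attention rectified linear unit (AReLU).

    Two learnable scalars modulate the activation: negative inputs are
    scaled by a clamped `alpha`, while positive inputs are boosted by
    `1 + sigmoid(beta)`, acting as an element-wise attention gate.
    Initial values are common choices, not the authors' exact setup.
    """

    def __init__(self, alpha: float = 0.9, beta: float = 2.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))
        self.beta = nn.Parameter(torch.tensor(beta))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = torch.clamp(self.alpha, 0.01, 0.99)  # keep negative slope in a stable range
        gate = torch.sigmoid(self.beta)              # attention weight for positive responses
        return torch.where(x >= 0, (1.0 + gate) * x, alpha * x)


if __name__ == "__main__":
    act = AReLU()
    x = torch.randn(4, 8)
    print(act(x).shape)  # torch.Size([4, 8])
```

In practice such an activation would simply replace the standard ReLU layers in the countermeasure network, so that the learnable gates can emphasize feature channels that carry artifact-related information.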
