Paper Title
Defense against Adversarial Attacks on Hybrid Speech Recognition using Joint Adversarial Fine-tuning with Denoiser
Paper Authors
Paper Abstract
Adversarial attacks are a threat to automatic speech recognition (ASR) systems, and it is imperative to propose defenses to protect them. In this paper, we perform experiments showing that the K2 conformer hybrid ASR system is strongly affected by white-box adversarial attacks. We propose three defenses: a denoiser pre-processor, adversarial fine-tuning of the ASR model, and adversarial fine-tuning of a joint model of the ASR and denoiser. Our evaluation shows that the denoiser pre-processor (trained on offline adversarial examples) fails to defend against adaptive white-box attacks. However, adversarially fine-tuning the denoiser using a tandem model of the denoiser and the ASR offers greater robustness. We evaluate two variants of this defense: one updating the parameters of both models, and a second keeping the ASR frozen. The joint model offers a mean absolute decrease of 19.3\% in ground-truth (GT) word error rate (WER) with reference to the baseline against fast gradient sign method (FGSM) attacks with different $L_\infty$ norms. The joint model with frozen ASR parameters gives the best defense against projected gradient descent (PGD) with 7 iterations, yielding a mean absolute increase of 22.3\% GT WER with reference to the baseline; against PGD with 500 iterations, it yields a mean absolute decrease of 45.08\% GT WER and an increase of 68.05\% in adversarial target WER.
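To make the attack and defense procedures in the abstract concrete, below is a minimal PyTorch-style sketch of FGSM, $L_\infty$-bounded PGD, and one step of joint adversarial fine-tuning on a tandem denoiser-plus-ASR model. This is an illustrative sketch, not the paper's actual recipe: the module names (`denoiser`, `asr`), the generic `loss_fn`, and the step-size parameters are assumed placeholders, and the real K2 conformer hybrid pipeline is not reproduced here.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, eps):
    """FGSM: a single signed-gradient step of size eps (an L_inf-bounded perturbation)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).detach()

def pgd_attack(model, loss_fn, x, y, eps, alpha, n_iters):
    """PGD: repeated signed-gradient steps of size alpha, each projected back
    onto the L_inf eps-ball around the clean input x (e.g. 7 or 500 iterations)."""
    x_adv = x.clone().detach()
    for _ in range(n_iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # projection onto the eps-ball
    return x_adv.detach()

def joint_adversarial_step(denoiser, asr, loss_fn, optimizer, x, y,
                           eps, alpha, n_iters, freeze_asr=True):
    """One adversarial fine-tuning step on the tandem model asr(denoiser(x)).
    Adversarial examples are crafted against the current tandem model (i.e., an
    adaptive attack), then used to update the denoiser and, optionally, the ASR.
    `optimizer` is assumed to hold only the parameters intended to be trained."""
    tandem = lambda inp: asr(denoiser(inp))
    x_adv = pgd_attack(tandem, loss_fn, x, y, eps, alpha, n_iters)
    if freeze_asr:  # frozen-ASR variant: only the denoiser is updated
        for p in asr.parameters():
            p.requires_grad_(False)
    optimizer.zero_grad()
    loss = loss_fn(tandem(x_adv), y)
    loss.backward()
    optimizer.step()
    return float(loss)
```

Note that FGSM is the one-step special case of PGD, and that generating the adversarial example against the current tandem model inside the training loop is what makes the fine-tuning robust to adaptive white-box attacks, unlike a denoiser trained only on offline adversarial examples.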