Paper Title

Recent improvements of ASR models in the face of adversarial attacks

Paper Authors

Raphael Olivier, Bhiksha Raj

Paper Abstract

Like many other tasks involving neural networks, speech recognition models are vulnerable to adversarial attacks. However, recent research has pointed out differences between attacks and defenses on ASR models compared to image models. Improving the robustness of ASR models requires a paradigm shift, from evaluating attacks on one or a few models to a systematic approach to evaluation. We lay the ground for such research by evaluating a representative set of adversarial attacks on a variety of architectures: targeted and untargeted, optimization-based and speech processing-based, white-box and black-box. Our results show that the relative strengths of different attack algorithms vary considerably when the model architecture changes, and that the results of some attacks should not be blindly trusted. They also indicate that training choices such as self-supervised pretraining can significantly affect robustness by enabling transferable perturbations. We release our source code as a package that should help future research evaluate attacks and defenses.
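
The abstract mentions untargeted, optimization-based, white-box attacks among those evaluated. As a minimal illustrative sketch (not the authors' released package), the following shows how one member of that family can be run against a CTC-trained ASR model: projected gradient descent (PGD) that maximizes the CTC loss under an L-infinity budget. The `ToyASR` model, the function name `pgd_untargeted_ctc`, and the hyperparameters (`eps`, `alpha`, `steps`) are all hypothetical stand-ins; any PyTorch ASR model returning per-frame log-probabilities could be substituted.

```python
# Minimal sketch of an untargeted white-box PGD attack on a CTC-trained
# ASR model. Everything here is an illustrative stand-in, not the paper's
# released code.
import torch
import torch.nn as nn

def pgd_untargeted_ctc(model, waveform, targets, target_lengths,
                       eps=0.001, alpha=0.0002, steps=40):
    """Maximize the CTC loss w.r.t. the input waveform (untargeted attack)."""
    ctc = nn.CTCLoss(blank=0, zero_infinity=True)
    delta = torch.zeros_like(waveform, requires_grad=True)
    for _ in range(steps):
        log_probs = model(waveform + delta)          # (time, batch, classes)
        input_lengths = torch.full((log_probs.size(1),), log_probs.size(0),
                                   dtype=torch.long)
        loss = ctc(log_probs, targets, input_lengths, target_lengths)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()       # gradient ascent step
            delta.clamp_(-eps, eps)                  # project onto L-inf ball
        delta.grad.zero_()
    return (waveform + delta).detach()

# Toy stand-in model: per-frame log-probabilities over 32 CTC classes.
class ToyASR(nn.Module):
    def __init__(self, classes=32, hop=320):
        super().__init__()
        self.hop = hop
        self.proj = nn.Linear(hop, classes)
    def forward(self, wav):                          # wav: (batch, samples)
        frames = wav.unfold(1, self.hop, self.hop)   # (batch, time, hop)
        return self.proj(frames).log_softmax(-1).transpose(0, 1)

model = ToyASR()
wav = torch.randn(1, 16000)                          # 1 s of 16 kHz audio
targets = torch.randint(1, 32, (1, 10))              # dummy transcript labels
adv = pgd_untargeted_ctc(model, wav, targets, torch.tensor([10]))
```

A targeted variant of the same attack would instead minimize the CTC loss toward an attacker-chosen transcript, descending rather than ascending the gradient.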
