Paper Title
Tubes Among Us: Analog Attack on Automatic Speaker Identification
Paper Authors
Paper Abstract
Recent years have seen a surge in the popularity of acoustics-enabled personal devices powered by machine learning. Yet, machine learning has proven to be vulnerable to adversarial examples. Many modern systems protect themselves against such attacks by targeting artificiality, i.e., they deploy mechanisms to detect the lack of human involvement in generating the adversarial examples. However, these defenses implicitly assume that humans are incapable of producing meaningful and targeted adversarial examples. In this paper, we show that this underlying assumption is wrong. In particular, we demonstrate that for tasks like speaker identification, a human is capable of producing analog adversarial examples directly, with little cost and supervision: by simply speaking through a tube, an adversary reliably impersonates other speakers in the eyes of ML models for speaker identification. Our findings extend to a range of other acoustic-biometric tasks, such as liveness detection, calling into question their use in real-life security-critical settings such as phone banking.
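The physical mechanism the abstract describes is simple enough to sketch digitally. The following is a hypothetical illustration, not the authors' apparatus: it approximates a tube open at one end as a quarter-wavelength resonator with resonances f_n = (2n - 1)c / 4L, emulates the tube with `scipy.signal.iirpeak` band-pass peaks, and marks where an attacker would score candidate tube lengths against a target speaker's voiceprint. The `embed` model and `target_embedding` referenced in the comments are assumed placeholders for any speaker-embedding system.

```python
import numpy as np
from scipy import signal

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def tube_resonances(length_m, fmax):
    """Resonant frequencies of a tube open at one end: f_n = (2n - 1) * c / (4L)."""
    freqs, n = [], 1
    while True:
        f = (2 * n - 1) * SPEED_OF_SOUND / (4.0 * length_m)
        if f > fmax:
            return freqs
        freqs.append(f)
        n += 1

def apply_tube(audio, sr, length_m, q=8.0):
    """Crudely emulate the tube by summing narrow band-pass peaks at its resonances."""
    out = np.zeros(len(audio))
    for f in tube_resonances(length_m, fmax=0.45 * sr):  # stay below Nyquist
        b, a = signal.iirpeak(f, Q=q, fs=sr)  # peak filter centred on one resonance
        out += signal.lfilter(b, a, audio)
    return out / (np.max(np.abs(out)) + 1e-9)  # normalize to avoid clipping

def cosine(a, b):
    """Similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

if __name__ == "__main__":
    sr = 16000
    speech = np.random.default_rng(0).standard_normal(sr)  # stand-in for a recording
    for length in (0.2, 0.4, 0.6):  # candidate tube lengths in metres
        shifted = apply_tube(speech, sr, length)
        # In a real attack, the adversary would compare embed(shifted) against the
        # target speaker's enrolled embedding (embed() is a hypothetical model here):
        # score = cosine(embed(shifted), target_embedding)
```

The key point this sketch makes concrete is that the "perturbation" is a fixed, physically realizable linear filter selected by sweeping one parameter (tube length), rather than a per-sample digital perturbation, which is why artificiality-based defenses do not trigger on it.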