Paper Title
Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems
Paper Authors
Abstract
We propose a new class of signal injection attacks on microphones by physically converting light to sound. We show how an attacker can inject arbitrary audio signals to a target microphone by aiming an amplitude-modulated light at the microphone's aperture. We then proceed to show how this effect leads to a remote voice-command injection attack on voice-controllable systems. Examining various products that use Amazon's Alexa, Apple's Siri, Facebook's Portal, and Google Assistant, we show how to use light to obtain control over these devices at distances up to 110 meters and from two separate buildings. Next, we show that user authentication on these devices is often lacking, allowing the attacker to use light-injected voice commands to unlock the target's smartlock-protected front doors, open garage doors, shop on e-commerce websites at the target's expense, or even unlock and start various vehicles connected to the target's Google account (e.g., Tesla and Ford). Finally, we conclude with possible software and hardware defenses against our attacks.
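The core signal-processing idea behind the attack, encoding an audio waveform as an amplitude-modulated light intensity, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the bias point, modulation depth, and current limits are assumed values for a hypothetical laser diode driver.

```python
import math

def modulate(audio, bias=200.0, depth=150.0, i_max=400.0):
    """Map audio samples in [-1, 1] to a laser diode drive current (mA).

    i(t) = bias + depth * audio(t); the intensity of the emitted light
    then tracks the audio waveform. Clipping keeps the current inside an
    assumed safe operating range [0, i_max] for the diode.
    """
    out = []
    for s in audio:
        i = bias + depth * s
        out.append(max(0.0, min(i_max, i)))
    return out

# A 1 kHz test tone sampled at 48 kHz stands in for a voice command.
fs = 48_000
tone = [math.sin(2 * math.pi * 1000 * n / fs) for n in range(480)]
current = modulate(tone)
```

Because a MEMS microphone responds to the modulated light intensity much as it does to sound pressure, driving the laser with this current reproduces the original audio at the target device.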