Paper Title

Privacy-preserving Voice Analysis via Disentangled Representations

Authors

Ranya Aloufi, Hamed Haddadi, David Boyle

Abstract

Voice User Interfaces (VUIs) are increasingly popular and built into smartphones, home assistants, and Internet of Things (IoT) devices. Despite offering an always-on convenient user experience, VUIs raise new security and privacy concerns for their users. In this paper, we focus on attribute inference attacks in the speech domain, demonstrating the potential for an attacker to accurately infer a target user's sensitive and private attributes (e.g. their emotion, sex, or health status) from deep acoustic models. To defend against this class of attacks, we design, implement, and evaluate a user-configurable, privacy-aware framework for optimizing speech-related data sharing mechanisms. Our objective is to enable primary tasks such as speech recognition and user identification, while removing sensitive attributes in the raw speech data before sharing it with a cloud service provider. We leverage disentangled representation learning to explicitly learn independent factors in the raw data. Based on a user's preferences, a supervision signal informs the filtering out of invariant factors while retaining the factors reflected in the selected preference. Our experimental evaluation over five datasets shows that the proposed framework can effectively defend against attribute inference attacks by reducing their success rates to approximately that of guessing at random, while maintaining accuracy in excess of 99% for the tasks of interest. We conclude that negotiable privacy settings enabled by disentangled representations can bring new opportunities for privacy-preserving applications.
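The abstract describes splitting a learned speech representation into independent factors and filtering out the sensitive ones according to a user's preference before sharing. Below is a minimal sketch of that idea, not the authors' implementation: the class name `DisentangledEncoder`, the helper `filter_representation`, the feature dimensions, and the use of PyTorch are all illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper) of preference-based filtering
# over a disentangled speech representation: the encoder splits its latent
# code into a "task" partition (e.g. linguistic content) and a "sensitive"
# partition (e.g. emotion, sex, health status), and only the task factors
# are shared unless the user opts in.
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Maps an input acoustic feature vector to two latent partitions."""
    def __init__(self, in_dim=80, task_dim=32, sens_dim=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.task_head = nn.Linear(128, task_dim)  # factors needed for the primary task
        self.sens_head = nn.Linear(128, sens_dim)  # factors carrying sensitive attributes

    def forward(self, x):
        h = self.backbone(x)
        return self.task_head(h), self.sens_head(h)

def filter_representation(z_task, z_sens, share_sensitive=False):
    """Apply the user's privacy preference before sending data to the cloud:
    keep the task factors and zero out the sensitive factors unless the user
    has explicitly chosen to share them."""
    if not share_sensitive:
        z_sens = torch.zeros_like(z_sens)
    return torch.cat([z_task, z_sens], dim=-1)

if __name__ == "__main__":
    enc = DisentangledEncoder()
    frames = torch.randn(4, 80)  # hypothetical batch of acoustic feature frames
    z_task, z_sens = enc(frames)
    shared = filter_representation(z_task, z_sens, share_sensitive=False)
    print(shared.shape)  # torch.Size([4, 64])
```

In the paper's framework, the supervision signal comes from training objectives that tie each partition to its intended factor; the sketch above only illustrates the final filtering step that the abstract describes.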
