Title
Face-Off: Adversarial Face Obfuscation
Authors
Abstract
Advances in deep learning have made face recognition technologies pervasive. While useful to social media platforms and users, this technology poses significant privacy threats. Coupled with the abundant information they hold about users, service providers can associate users with social interactions, visited places, activities, and preferences, some of which the user may not want to share. Additionally, facial recognition models used by various agencies are trained on data scraped from social media platforms. Existing approaches to mitigating these privacy risks from unwanted face recognition impose an unfavorable privacy-utility trade-off on users. In this paper, we address this trade-off by proposing Face-Off, a privacy-preserving framework that introduces strategic perturbations to the user's face to prevent it from being correctly recognized. To realize Face-Off, we overcome a set of challenges related to the black-box nature of commercial face recognition services, and the scarcity of literature on adversarial attacks against metric networks. We implement and evaluate Face-Off and find that it deceives three commercial face recognition services, from Microsoft, Amazon, and Face++. Our user study with 423 participants further shows that the perturbations come at an acceptable cost to users.
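The abstract describes perturbing a face image so that a metric (embedding) network no longer matches it to the original. The paper's own attack is not given here, but the general idea can be illustrated with a minimal PGD-style sketch: iteratively nudge the input, within a small L-infinity budget, in the direction that pushes its embedding away from the original embedding. The linear "embedding network" below is a toy stand-in for a real face-recognition model, and all names and parameters are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-embedding network: a fixed linear projection.
# A real metric network is a deep CNN; this only illustrates the attack loop.
W = rng.standard_normal((16, 64)) / 8.0

def embed(x):
    return W @ x

def perturb(x, eps=0.05, steps=40, alpha=0.01):
    """PGD-style obfuscation sketch: maximize the L2 distance between the
    embeddings of x and x + delta, keeping ||delta||_inf <= eps."""
    e0 = embed(x)
    delta = np.zeros_like(x)
    for _ in range(steps):
        diff = embed(x + delta) - e0        # how far the embedding has moved
        # For a linear embed, the gradient of ||diff||_2 w.r.t. delta
        # is W^T diff / ||diff||; we only need its direction here.
        g = W.T @ diff
        n = np.linalg.norm(g)
        if n < 1e-12:                       # zero gradient at delta = 0:
            g = rng.standard_normal(x.shape)  # take a random first step
            n = np.linalg.norm(g)
        delta = np.clip(delta + alpha * g / n, -eps, eps)
    return x + delta

x = rng.random(64)                          # stand-in "face image" vector
x_adv = perturb(x)
moved = np.linalg.norm(embed(x_adv) - embed(x))
print(f"max pixel change: {np.max(np.abs(x_adv - x)):.3f}")
print(f"embedding displacement: {moved:.3f}")
```

The key design point mirrored from the abstract is the constraint: the perturbation budget `eps` bounds how visibly the face changes (the "cost" measured in the user study), while the objective drives the embedding far enough that a matcher thresholding on embedding distance fails to recognize the face. Attacking black-box commercial services, as the paper does, additionally requires estimating or transferring such gradients rather than computing them directly.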