Paper Title

SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images

Paper Authors

Mingfu Xue, Shichang Sun, Zhiyu Wu, Can He, Jian Wang, Weiqiang Liu

Paper Abstract

The popularity of various social platforms has prompted more people to share their routine photos online. However, such online photo sharing leads to undesirable privacy leakage: advanced deep neural network (DNN) based object detectors can easily steal users' personal information exposed in shared photos. In this paper, we propose a novel adversarial example based privacy-preserving technique for social images against object-detector-based privacy stealing. Specifically, we develop an Object Disappearance Algorithm to craft two kinds of adversarial social images. One hides all objects in a social image from being detected by an object detector, and the other causes customized sensitive objects to be incorrectly classified by the object detector. The Object Disappearance Algorithm constructs a perturbation on a clean social image. Once injected with the perturbation, the social image easily fools the object detector, while its visual quality is not degraded. We use two metrics, privacy-preserving success rate and privacy leakage rate, to evaluate the effectiveness of the proposed method. Experimental results show that the proposed method can effectively protect the privacy of social images. The privacy-preserving success rates of the proposed method on the MS-COCO and PASCAL VOC 2007 datasets are as high as 96.1% and 99.3%, respectively, and the privacy leakage rates on these two datasets are as low as 0.57% and 0.07%, respectively. In addition, compared with existing image processing methods (low brightness, noise, blur, mosaic, and JPEG compression), the proposed method achieves much better performance in both privacy protection and maintenance of image visual quality.
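The abstract outlines but does not give the Object Disappearance Algorithm itself. As a rough illustration of the general idea (an imperceptible, gradient-crafted perturbation that lowers a detector's confidence scores until nothing is detected), here is a minimal PGD-style sketch in PyTorch. The choice of detector (torchvision's pretrained Faster R-CNN as a stand-in for the adversary's model), the confidence-sum loss, and the hyperparameters steps, eps, alpha, and score_thresh are all illustrative assumptions, not the paper's actual method.

import torch
import torchvision

# A pretrained Faster R-CNN stands in for the privacy-stealing detector.
# (weights="DEFAULT" is the torchvision >= 0.13 API; older versions use
# pretrained=True.)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the image perturbation is optimized

def hide_objects(image, steps=40, eps=8 / 255, alpha=1 / 255, score_thresh=0.3):
    # image: float tensor [3, H, W] with values in [0, 1].
    # eps bounds the perturbation (L-infinity) so the change stays subtle;
    # alpha is the per-step size. Both values are illustrative, not from the paper.
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        out = model([image + delta])[0]                 # eval-mode forward, one image
        confident = out["scores"][out["scores"] > score_thresh]
        if confident.numel() == 0:                      # detector already sees nothing
            break
        loss = confident.sum()                          # total confidence of all detections
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()          # descend to lower the scores
            delta.clamp_(-eps, eps)                     # keep the change imperceptible
            delta.add_(image).clamp_(0, 1).sub_(image)  # keep image + delta a valid image
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

Usage sketch: load a photo as a [0, 1] float tensor, e.g. torchvision.io.read_image("photo.jpg").float() / 255, and pass it through hide_objects before sharing. The second variant described in the abstract, making customized sensitive objects be misclassified, would instead use a targeted loss on those objects' class scores; the abstract does not specify that loss.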
