Paper Title

InfoScrub: Towards Attribute Privacy by Targeted Obfuscation

Authors

Wang, Hui-Po, Orekondy, Tribhuvanesh, Fritz, Mario

Abstract

Personal photos of individuals, when shared online, apart from exhibiting a myriad of memorable details, also reveal a wide range of private information and potentially entail privacy risks (e.g., online harassment, tracking). To mitigate such risks, it is crucial to study techniques that allow individuals to limit the private information leaked in visual data. We tackle this problem in a novel image obfuscation framework: to maximize entropy on inferences over targeted privacy attributes, while retaining image fidelity. We approach the problem based on an encoder-decoder style architecture, with two key novelties: (a) introducing a discriminator to perform bi-directional translation simultaneously from multiple unpaired domains; (b) predicting an image interpolation which maximizes uncertainty over a target set of attributes. We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
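To make the "uncertainty in bits" metric concrete, below is a minimal numpy sketch of the Shannon entropy an attribute classifier's prediction carries, and the gain an idealized obfuscation would achieve. The function name and the example probabilities are illustrative assumptions for a binary attribute, not the paper's implementation or its reported numbers.

```python
import numpy as np

def entropy_bits(probs):
    # Shannon entropy (in bits) of a predicted attribute distribution.
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-np.sum(p * np.log2(p)))

# A binary attribute classifier that is confident on the original image...
p_original = np.array([0.95, 0.05])
# ...but maximally uncertain on an ideally obfuscated one (hypothetical values).
p_obfuscated = np.array([0.5, 0.5])

# Obfuscation aims to maximize this entropy gain on targeted attributes,
# subject to an image-fidelity constraint (not modeled in this sketch).
gain = entropy_bits(p_obfuscated) - entropy_bits(p_original)
```

For a binary attribute the entropy is bounded by 1 bit, so the confident prediction above leaves roughly 0.7 bits of headroom; the paper's reported gains (up to 0.85 bits) are measured over its target attribute sets, not this toy setup.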
