Paper Title
Realistic Full-Body Anonymization with Surface-Guided GANs
Paper Authors
Paper Abstract
Recent work on image anonymization has shown that generative adversarial networks (GANs) can generate near-photorealistic faces to anonymize individuals. However, scaling up these networks to the entire human body has remained a challenging and yet unsolved task. We propose a new anonymization method that generates realistic humans for in-the-wild images. A key part of our design is to guide adversarial nets by dense pixel-to-surface correspondences between an image and a canonical 3D surface. We introduce Variational Surface-Adaptive Modulation (V-SAM), which embeds surface information throughout the generator. Combined with our novel discriminator surface supervision loss, the generator can synthesize high-quality humans with diverse appearances in complex and varying scenes. We demonstrate that surface guidance significantly improves image quality and diversity of samples, yielding a highly practical generator. Finally, we show that our method preserves data usability without infringing privacy when collecting image datasets for training computer vision models. Source code and appendix are available at: \href{https://github.com/hukkelas/full_body_anonymization}{github.com/hukkelas/full\_body\_anonymization}
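The abstract's central idea, conditioning the generator on dense pixel-to-surface correspondences, can be illustrated with a minimal sketch. The code below is not the paper's V-SAM implementation (which additionally includes a variational component and is available in the linked repository); it only shows, under assumed class names, channel sizes, and layer choices, how a spatially-adaptive modulation layer might scale and shift generator features using a per-pixel surface-embedding map.

```python
# Hypothetical sketch, not the authors' released code: spatially-adaptive
# modulation of generator features conditioned on a dense pixel-to-surface
# correspondence map (e.g., surface embeddings rasterized per pixel).
import torch
import torch.nn as nn


class SurfaceAdaptiveModulation(nn.Module):
    """Scale and shift normalized features using a per-pixel surface embedding map."""

    def __init__(self, feature_channels: int, surface_channels: int, hidden: int = 128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feature_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(surface_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.to_gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, features: torch.Tensor, surface_embedding: torch.Tensor) -> torch.Tensor:
        # Resize the surface map to the current feature resolution so the same
        # conditioning signal can be applied at every generator stage.
        surface = nn.functional.interpolate(
            surface_embedding, size=features.shape[-2:], mode="nearest"
        )
        hidden = self.shared(surface)
        gamma = self.to_gamma(hidden)
        beta = self.to_beta(hidden)
        return self.norm(features) * (1.0 + gamma) + beta


if __name__ == "__main__":
    layer = SurfaceAdaptiveModulation(feature_channels=64, surface_channels=16)
    feats = torch.randn(1, 64, 32, 32)    # intermediate generator features
    surf = torch.randn(1, 16, 128, 128)   # dense pixel-to-surface embedding map
    out = layer(feats, surf)
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In this style of conditioning, the surface map acts like a per-pixel style code, so a module of this kind can be inserted at every generator resolution, which is one way to read "embeds surface information throughout the generator" in the abstract.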