Paper Title
Image Generation with Self Pixel-wise Normalization
Paper Authors
Paper Abstract
Region-adaptive normalization (RAN) methods have been widely used in generative adversarial network (GAN)-based image-to-image translation. However, since these approaches need a mask image to infer the pixel-wise affine transformation parameters, they cannot be applied to general image generation models that have no paired mask images. To resolve this problem, this paper presents a novel normalization method, called self pixel-wise normalization (SPN), which effectively boosts generative performance by performing a pixel-adaptive affine transformation without a mask image. In our method, the transformation parameters are derived from a self-latent mask that divides the feature map into foreground and background regions. Visualization of the self-latent masks shows that SPN effectively captures a single object to be generated as the foreground. Since the proposed method produces the self-latent mask without external data, it is easily applicable to existing generative models. Extensive experiments on various datasets reveal that the proposed method significantly improves the performance of image generation techniques in terms of Fréchet inception distance (FID) and inception score (IS).
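To make the described mechanism concrete, below is a minimal PyTorch sketch of a self pixel-wise normalization layer, based only on the abstract: a self-latent mask is predicted from the feature map itself and used to derive per-pixel scale and shift parameters. The specific design choices here (a 1x1 convolution plus sigmoid for the mask, 3x3 convolutions for gamma and beta, and parameter-free batch normalization as the base) are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class SelfPixelwiseNorm(nn.Module):
    """Sketch of self pixel-wise normalization (SPN), as described in the abstract.

    Assumptions (not specified in the abstract): the self-latent mask is produced
    by a 1x1 convolution followed by a sigmoid, and the pixel-wise affine
    parameters (gamma, beta) are predicted from that mask by 3x3 convolutions
    applied on top of parameter-free batch normalization.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)
        # Self-latent mask: softly divides the feature map into foreground/background.
        self.mask_conv = nn.Conv2d(channels, 1, kernel_size=1)
        # Pixel-wise affine parameters derived from the self-latent mask.
        self.gamma_conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.beta_conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        normalized = self.norm(x)                 # (N, C, H, W), parameter-free normalization
        mask = torch.sigmoid(self.mask_conv(x))   # (N, 1, H, W), values in [0, 1]
        gamma = self.gamma_conv(mask)             # (N, C, H, W), per-pixel scale
        beta = self.beta_conv(mask)               # (N, C, H, W), per-pixel shift
        return normalized * (1 + gamma) + beta


if __name__ == "__main__":
    layer = SelfPixelwiseNorm(channels=64)
    feat = torch.randn(4, 64, 16, 16)
    print(layer(feat).shape)  # torch.Size([4, 64, 16, 16])
```

Because the mask is computed from the feature map itself rather than from an external segmentation map, a layer of this form can be dropped into an unconditional generator wherever a standard normalization layer would otherwise sit, which matches the abstract's claim that SPN requires no paired mask images.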