Paper Title
IntroVAC: Introspective Variational Classifiers for Learning Interpretable Latent Subspaces
Paper Authors
Paper Abstract
Learning useful representations of complex data has been the subject of extensive research for many years. With the diffusion of Deep Neural Networks, Variational Autoencoders have gained considerable attention, since they provide an explicit model of the data distribution based on an encoder/decoder architecture that can both generate images and encode them in a low-dimensional subspace. However, the latent space is not easily interpretable, and the generative capabilities show some limitations, since images typically look blurry and lack detail. In this paper, we propose the Introspective Variational Classifier (IntroVAC), a model that learns interpretable latent subspaces by exploiting information from an additional label and provides improved image quality thanks to an adversarial training strategy. We show that IntroVAC is able to learn meaningful directions in the latent space, enabling fine-grained manipulation of image attributes. We validate our approach on the CelebA dataset.
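The attribute-manipulation mechanism the abstract alludes to can be illustrated with a minimal sketch: a linear classifier operating on latent codes separates images with and without a given attribute, and the normal vector of its decision boundary serves as an editing direction. Everything below (`encoder`, `decoder`, `attr_classifier`, `manipulate`, `latent_dim`, `alpha`) is a hypothetical placeholder, not the paper's implementation; the joint VAE/classifier training and the adversarial strategy described in the abstract are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for a trained encoder/decoder; the real model
# maps images to a latent space and back through deep convolutional nets.
latent_dim = 128
encoder = nn.Linear(3 * 64 * 64, latent_dim)   # placeholder encoder
decoder = nn.Linear(latent_dim, 3 * 64 * 64)   # placeholder decoder

# A linear classifier on latent codes predicts the attribute label;
# its weight vector is normal to the decision boundary and can be
# read off as an attribute direction in latent space.
attr_classifier = nn.Linear(latent_dim, 1)

def manipulate(x_flat: torch.Tensor, alpha: float) -> torch.Tensor:
    """Shift a latent code along the attribute direction and decode."""
    z = encoder(x_flat)
    w = F.normalize(attr_classifier.weight, dim=1)  # unit-norm direction
    z_edit = z + alpha * w                          # move across the boundary
    return decoder(z_edit)

x = torch.randn(1, 3 * 64 * 64)    # stand-in for a flattened CelebA image
edited = manipulate(x, alpha=3.0)  # positive alpha strengthens the attribute
```

Under this scheme, decoding `z + alpha * w` with increasing `alpha` should progressively strengthen the attribute while leaving unrelated factors largely untouched, which is the kind of fine-grained manipulation the abstract reports on CelebA.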