Paper Title
Demystifying Randomly Initialized Networks for Evaluating Generative Models
Paper Authors
Paper Abstract
Evaluation of generative models is mostly based on a comparison between the estimated distribution and the ground-truth distribution in a certain feature space. To embed samples into informative features, previous works often use convolutional neural networks optimized for classification, a practice that has been criticized by recent studies. Therefore, various feature spaces have been explored to discover alternatives. Among them, a surprising approach is to use a randomly initialized neural network for feature embedding. However, the fundamental basis for employing such random features has not been sufficiently justified. In this paper, we rigorously investigate the feature space of models with random weights in comparison to that of trained models. Furthermore, we provide empirical evidence for choosing networks whose random features yield consistent and reliable results. Our results indicate that features from random networks can evaluate generative models similarly well to those from trained networks, and furthermore, that the two types of features can be used together in a complementary way.
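To make the setup described in the abstract concrete, below is a minimal sketch of evaluating a generative model by comparing real and generated samples in a feature space, once with a trained feature extractor and once with the same architecture randomly initialized. The choice of a torchvision ResNet-18 backbone and a Fréchet-distance-style metric (as in FID) is an illustrative assumption, not the paper's exact protocol; the networks and metrics studied in the paper may differ.

```python
# Sketch: compare real vs. generated images in the feature space of a trained
# network and of the same network with random weights. Backbone and metric are
# illustrative assumptions, not the paper's exact setup.
import numpy as np
import torch
from scipy import linalg
from torchvision.models import resnet18, ResNet18_Weights


def extract_features(model: torch.nn.Module, images: torch.Tensor) -> np.ndarray:
    """Embed a batch of images (N, 3, H, W) into feature vectors."""
    model.eval()
    with torch.no_grad():
        return model(images).cpu().numpy()


def frechet_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_x @ cov_y, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error
    diff = mu_x - mu_y
    return float(diff @ diff + np.trace(cov_x + cov_y - 2.0 * covmean))


# Feature extractor trained for ImageNet classification ...
trained = resnet18(weights=ResNet18_Weights.DEFAULT)
trained.fc = torch.nn.Identity()  # use penultimate-layer features

# ... versus the same architecture with randomly initialized weights.
random_net = resnet18(weights=None)
random_net.fc = torch.nn.Identity()

# Dummy stand-ins for real and generated images (replace with actual samples).
real_images = torch.rand(64, 3, 224, 224)
fake_images = torch.rand(64, 3, 224, 224)

for name, net in [("trained", trained), ("random", random_net)]:
    d = frechet_distance(
        extract_features(net, real_images),
        extract_features(net, fake_images),
    )
    print(f"{name} features: Frechet distance = {d:.3f}")
```

In this sketch the two extractors share an architecture and differ only in their weights, which is the comparison the abstract motivates: whether the random-weight feature space ranks generative models similarly to the trained one, and whether the two scores carry complementary information.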