Paper title
Algorithms that get old: the case of generative deep neural networks
Paper authors
Paper abstract
Generative deep neural networks used in machine learning, such as Variational Auto-Encoders (VAEs) and Generative Adversarial Networks (GANs), produce new objects each time they are asked to do so, under the constraint that the new objects remain similar to a list of examples given as input. However, this behavior is unlike that of human artists, who change their style as time goes by and seldom return to the style of their initial creations. We investigate a situation where VAEs are used to sample from a probability measure described by some empirical dataset. Based on recent works on Radon-Sobolev statistical distances, we propose a numerical paradigm, to be used in conjunction with a generative algorithm, that satisfies the following two requirements: the objects created do not repeat, and they evolve to fill the entire target probability distribution.
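To make the setting concrete, the following is a minimal, self-contained sketch and not the paper's algorithm: a hypothetical affine "decoder" stands in for a trained VAE decoder, latent samples z ~ N(0, I) are decoded into new objects, and a generic energy distance is used as a stand-in for the Radon-Sobolev distance to gauge how well the generated batch covers an empirical target dataset; a crude minimum pairwise gap illustrates the non-repetition requirement. All names and modeling choices here are illustrative assumptions.

```python
# Illustrative sketch only: NOT the paper's method. The energy distance is a
# stand-in for the Radon-Sobolev distance, and the "decoder" is a hypothetical
# placeholder for a trained VAE decoder.
import numpy as np

rng = np.random.default_rng(0)

# Empirical target dataset: a toy 2-D Gaussian mixture.
target = np.concatenate([
    rng.normal(loc=(-2.0, 0.0), scale=0.5, size=(500, 2)),
    rng.normal(loc=(+2.0, 0.0), scale=0.5, size=(500, 2)),
])

def decoder(z):
    """Placeholder affine map standing in for a trained VAE decoder (assumption)."""
    return z @ np.array([[1.5, 0.0], [0.0, 0.5]])

def energy_distance(x, y):
    """Energy distance between two samples; used here as a generic statistical distance."""
    def mean_dist(a, b):
        return np.mean(np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1))
    return 2.0 * mean_dist(x, y) - mean_dist(x, x) - mean_dist(y, y)

# Generate a batch of new objects by decoding latent samples z ~ N(0, I).
z = rng.standard_normal((500, 2))
generated = decoder(z)

# How well does the generated batch cover the target distribution?
print("distance to target:", energy_distance(generated, target))

# Crude non-repetition check: smallest pairwise gap inside the generated batch.
pairwise = np.linalg.norm(generated[:, None, :] - generated[None, :, :], axis=-1)
np.fill_diagonal(pairwise, np.inf)
print("min pairwise distance among generated objects:", pairwise.min())
```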