Paper Title

Regularization Methods for Generative Adversarial Networks: An Overview of Recent Studies

Authors

Minhyeok Lee and Junhee Seok

Abstract

Despite its short history, the Generative Adversarial Network (GAN) has been extensively studied and used for various tasks, including its original purpose, i.e., synthetic sample generation. However, applying GANs to different data types with diverse neural network architectures has been hindered by a limitation in training: the model easily diverges. This notorious training instability of GANs is well known and has been addressed in numerous studies. Consequently, in order to make GAN training stable, numerous regularization methods have been proposed in recent years. This paper reviews recently introduced regularization methods, most of which have been published in the last three years. Specifically, we focus on general methods that can be used regardless of neural network architecture. To explore the latest research trends in regularization for GANs, the methods are classified into several groups according to their operating principles, and the differences between the methods are analyzed. Furthermore, to provide practical knowledge on using these methods, we investigate popular methods that have been frequently employed in state-of-the-art GANs. In addition, we discuss the limitations of existing methods and propose future research directions.
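Many of the architecture-agnostic regularization methods the survey covers take the form of an extra penalty term added to the discriminator loss. As one concrete illustration (not drawn from the paper itself), the following is a minimal PyTorch sketch of a gradient penalty in the style popularized by WGAN-GP, one of the methods frequently used in state-of-the-art GANs; the discriminator D, the batches real and fake, and the weight lambda_gp are assumed placeholders.

import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    """Gradient penalty on interpolated samples, in the style of WGAN-GP.

    Assumes image batches of shape (N, C, H, W); D, real, fake, and
    lambda_gp are hypothetical placeholders for illustration only.
    """
    batch_size = real.size(0)
    # Random interpolation between real and generated samples.
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interp = (eps * real.detach() + (1.0 - eps) * fake.detach()).requires_grad_(True)
    scores = D(interp)
    # Gradient of the discriminator output with respect to its input.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,
        retain_graph=True,
    )[0]
    grads = grads.reshape(batch_size, -1)
    # Push the per-sample gradient norm toward 1 (a soft Lipschitz constraint).
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

In a training loop, this term would simply be added to the discriminator loss before its backward pass, which is what makes penalties of this kind easy to combine with arbitrary network architectures.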
