Paper Title
Lookahead optimizer improves the performance of Convolutional Autoencoders for reconstruction of natural images
Paper Authors
Paper Abstract
Autoencoders are a class of artificial neural networks that have gained a lot of attention in the recent past. Using the encoder block of an autoencoder, an input image can be compressed into a meaningful representation; a decoder is then employed to reconstruct that compressed representation back into a version that resembles the input image. Autoencoders have many applications in data compression and denoising. Another version of the autoencoder (AE) exists, called the variational AE (VAE), which acts as a generative model like a GAN. Recently, the Lookahead optimizer was introduced, which significantly enhances the performance of both Adam and SGD. In this paper, we implement convolutional autoencoders (CAE) and convolutional variational autoencoders (CVAE) with the Lookahead optimizer (wrapped around Adam) and compare them with their Adam-only counterparts. For this purpose, we use a movie dataset comprising natural images for the former case and CIFAR100 for the latter. We show that the Lookahead optimizer (with Adam) improves the performance of CAEs for the reconstruction of natural images.
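To make the setup concrete, below is a minimal sketch (not the authors' code) of the two pieces the abstract describes: a small convolutional autoencoder, and the Lookahead update rule of Zhang et al. (2019) wrapped around a standard Adam optimizer. The layer sizes, k = 5, and alpha = 0.5 are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch: convolutional autoencoder trained with Lookahead(Adam).
# All hyperparameters and layer shapes here are illustrative assumptions.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Toy CAE: a conv encoder compresses the image, a deconv decoder reconstructs it."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class Lookahead:
    """Lookahead (Zhang et al., 2019): slow weights phi track fast weights theta.
    Every k inner steps: phi <- phi + alpha * (theta - phi), then theta <- phi."""
    def __init__(self, inner_optimizer, k=5, alpha=0.5):
        self.inner = inner_optimizer
        self.k, self.alpha, self.step_count = k, alpha, 0
        # Snapshot of the slow weights, one copy per parameter.
        self.slow = [p.detach().clone()
                     for group in self.inner.param_groups for p in group["params"]]

    def step(self):
        self.inner.step()                  # fast (Adam) update of theta
        self.step_count += 1
        if self.step_count % self.k == 0:  # every k steps, sync with phi
            params = [p for g in self.inner.param_groups for p in g["params"]]
            with torch.no_grad():
                for p, slow in zip(params, self.slow):
                    slow += self.alpha * (p - slow)  # move slow weights toward fast
                    p.copy_(slow)                    # reset fast weights to slow

    def zero_grad(self):
        self.inner.zero_grad()


model = ConvAutoencoder()
opt = Lookahead(torch.optim.Adam(model.parameters(), lr=1e-3))
loss_fn = nn.MSELoss()

batch = torch.rand(8, 3, 32, 32)  # stand-in for a batch of natural images
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(batch), batch)  # reconstruction loss against the input
    loss.backward()
    opt.step()
```

The design point Lookahead exploits is that the slow weights average out oscillations in the fast Adam trajectory, so the inner optimizer runs unmodified and only the periodic phi-sync is added on top.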