Paper Title
Learning Autoencoders with Relational Regularization
Paper Authors
Paper Abstract
A new algorithmic framework is proposed for learning autoencoders of data distributions. We minimize the discrepancy between the model and target distributions, with a \emph{relational regularization} on the learnable latent prior. This regularization penalizes the fused Gromov-Wasserstein (FGW) distance between the latent prior and its corresponding posterior, allowing one to flexibly learn a structured prior distribution associated with the generative model. Moreover, it facilitates the co-training of multiple autoencoders even if they have heterogeneous architectures and incomparable latent spaces. We implement the framework with two scalable algorithms, making it applicable to both probabilistic and deterministic autoencoders. Our relational regularized autoencoder (RAE) outperforms existing methods, e.g., the variational autoencoder, Wasserstein autoencoder, and their variants, on generating images. Additionally, our relational co-training strategy for autoencoders achieves encouraging results in both synthetic and real-world multi-view learning tasks. The code is at https://github.com/HongtengXu/Relational-AutoEncoders.
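To make the FGW penalty concrete, here is a minimal numpy sketch of the fused Gromov-Wasserstein objective evaluated for a *fixed* coupling matrix. This is only an illustration of the quantity the regularizer penalizes, not the paper's actual optimization algorithm (which learns the coupling and the prior jointly); the function name `fgw_cost` and the trade-off weight `alpha` are assumptions for this example.

```python
import numpy as np

def fgw_cost(T, M, C1, C2, alpha=0.5):
    """Fused Gromov-Wasserstein objective for a fixed coupling T.

    T  : (n, m) coupling matrix between the two distributions
    M  : (n, m) pairwise feature-distance matrix (Wasserstein term)
    C1 : (n, n) intra-space distance matrix of the first space
    C2 : (m, m) intra-space distance matrix of the second space
    """
    # Feature (Wasserstein) term: <T, M>
    w_term = np.sum(T * M)
    # Structure (Gromov-Wasserstein) term:
    # sum_{i,k,j,l} (C1[i,k] - C2[j,l])^2 * T[i,j] * T[k,l]
    diff = C1[:, :, None, None] - C2[None, None, :, :]   # axes (i, k, j, l)
    gw_term = np.einsum('ikjl,ij,kl->', diff ** 2, T, T)
    return (1.0 - alpha) * w_term + alpha * gw_term
```

With identical spaces (`C1 == C2`), identical features (`M == 0`), and a diagonal coupling, the cost is zero; the structure term is what lets the penalty compare latent spaces of different dimensions, since it only touches intra-space distances.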