Paper Title

Amortised Learning by Wake-Sleep

Paper Authors

Li K. Wenliang, Theodore Moskovitz, Heishiro Kanagawa, Maneesh Sahani

Paper Abstract

Models that employ latent variables to capture structure in observed data lie at the heart of many current unsupervised learning algorithms, but exact maximum-likelihood learning for powerful and flexible latent-variable models is almost always intractable. Thus, state-of-the-art approaches either abandon the maximum-likelihood framework entirely, or else rely on a variety of variational approximations to the posterior distribution over the latents. Here, we propose an alternative approach that we call amortised learning. Rather than computing an approximation to the posterior over latents, we use a wake-sleep Monte-Carlo strategy to learn a function that directly estimates the maximum-likelihood parameter updates. Amortised learning is possible whenever samples of latents and observations can be simulated from the generative model, treating the model as a "black box". We demonstrate its effectiveness on a wide range of complex models, including those with latents that are discrete or supported on non-Euclidean spaces.
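
To make the wake-sleep idea in the abstract concrete, below is a minimal, self-contained sketch of an amortised-learning loop on a toy one-parameter Gaussian model. The toy model, the quadratic-feature regressor, and all names are illustrative assumptions for exposition, not the authors' implementation; the key point it demonstrates is that a regressor fit on samples simulated from the model ("sleep" data) can stand in for the intractable posterior expectation that the maximum-likelihood gradient requires.

```python
import numpy as np

# Toy generative model (an illustrative assumption, not the paper's setup):
#   z ~ N(0, 1),  x | z ~ N(theta * z, 1),  with a single scalar parameter theta.
# The maximum-likelihood gradient is a posterior expectation:
#   d/dtheta log p(x) = E[ d/dtheta log p(x, z) | x ] = E[ (x - theta*z) * z | x ].
# Rather than computing the posterior, fit a regressor f_phi(x) to per-sample
# joint-gradient targets on "sleep" samples simulated from the model itself:
# the least-squares optimum is exactly that conditional expectation.

rng = np.random.default_rng(0)
theta_true = 2.0
data = theta_true * rng.standard_normal(2000) + rng.standard_normal(2000)

theta, lr = 0.5, 0.1

def features(x):
    # For this toy model E[grad | x] is quadratic in x, so quadratic
    # features make the regression exact up to sampling noise.
    return np.stack([np.ones_like(x), x, x ** 2], axis=1)

for step in range(300):
    # Sleep phase: simulate (z, x) from the current model, used as a black box.
    z = rng.standard_normal(5000)
    x_sleep = theta * z + rng.standard_normal(5000)
    grad_targets = (x_sleep - theta * z) * z   # per-sample d/dtheta log p(x, z)

    # Fit f_phi by least squares; f_phi(x) then estimates E[grad | x].
    phi, *_ = np.linalg.lstsq(features(x_sleep), grad_targets, rcond=None)

    # Wake phase: apply the amortised gradient estimate to the real data.
    theta += lr * (features(data) @ phi).mean()

print(f"recovered theta = {theta:.3f} (ML solution is near +/-{theta_true})")
```

In the paper's general setting the regressor is a flexible function approximator and the parameters form a high-dimensional vector; this sketch only illustrates the wake/sleep split and the regression-to-posterior-expectation trick on a model simple enough to check by hand.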
