Title
Neural Implicit Manifold Learning for Topology-Aware Density Estimation
Authors
Abstract
Natural data observed in $\mathbb{R}^n$ is often constrained to an $m$-dimensional manifold $\mathcal{M}$, where $m < n$. This work focuses on the task of building theoretically principled generative models for such data. Current generative models learn $\mathcal{M}$ by mapping an $m$-dimensional latent variable through a neural network $f_\theta: \mathbb{R}^m \to \mathbb{R}^n$. These procedures, which we call pushforward models, incur a straightforward limitation: manifolds cannot in general be represented with a single parameterization, meaning that attempts to do so will incur either computational instability or the inability to learn probability densities within the manifold. To remedy this problem, we propose to model $\mathcal{M}$ as a neural implicit manifold: the set of zeros of a neural network. We then learn the probability density within $\mathcal{M}$ with a constrained energy-based model, which employs a constrained variant of Langevin dynamics to train and sample from the learned manifold. In experiments on synthetic and natural data, we show that our model can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
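To make the two ingredients concrete, below is a minimal sketch in PyTorch of a neural implicit manifold (the zero set of a network) and one projection-based constrained Langevin update. The network `F`, the `energy` function, and the dimensions `n` and `m` are illustrative stand-ins not specified in the abstract, and the step-then-project recipe shown here is one common way to constrain Langevin dynamics; the paper's exact variant may differ.

```python
import torch

# A hypothetical network F: R^n -> R^(n-m) whose zero set
# M = {x : F(x) = 0} models the m-dimensional data manifold.
n, m = 3, 2  # ambient and manifold dimensions (illustrative values)
F = torch.nn.Sequential(
    torch.nn.Linear(n, 64), torch.nn.Tanh(), torch.nn.Linear(64, n - m)
)

def newton_project(x, steps=5):
    """Pull a point back onto {F = 0} via Gauss-Newton iterations x <- x - J^+ F(x)."""
    for _ in range(steps):
        J = torch.autograd.functional.jacobian(F, x)  # (n-m, n) constraint Jacobian
        x = x - torch.linalg.pinv(J) @ F(x)
    return x

def constrained_langevin_step(x, energy, step=1e-3):
    """One Langevin update restricted to the tangent space, then reprojected."""
    x = x.detach().requires_grad_(True)
    g = torch.autograd.grad(energy(x), x)[0]          # gradient of a scalar energy
    J = torch.autograd.functional.jacobian(F, x)
    P = torch.eye(n) - torch.linalg.pinv(J) @ J       # projector onto ker(J), the tangent space
    x = x + P @ (-step * g + (2 * step) ** 0.5 * torch.randn(n))
    return newton_project(x.detach())

# Usage: start from a point on (or near) the manifold and run the chain.
x = newton_project(torch.randn(n))
energy = lambda z: (z ** 2).sum()  # stand-in energy; a real model would learn this
for _ in range(100):
    x = constrained_langevin_step(x, energy)
```

In a full constrained energy-based model, samples generated this way would serve both roles named in the abstract: providing negative samples for training the energy and drawing from the learned distribution supported on the manifold.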