Paper Title

Learning Implicit Generative Models with Theoretical Guarantees

Authors

Yuan Gao, Jian Huang, Yuling Jiao, Jin Liu

Abstract

We propose a \textbf{uni}fied \textbf{f}ramework for \textbf{i}mplicit \textbf{ge}nerative \textbf{m}odeling (UnifiGem) with theoretical guarantees by integrating approaches from optimal transport, numerical ODE, density-ratio (density-difference) estimation and deep neural networks. First, the problem of implicit generative learning is formulated as that of finding the optimal transport map between the reference distribution and the target distribution, which is characterized by a totally nonlinear Monge-Ampère equation. Interpreting the infinitesimal linearization of the Monge-Ampère equation from the perspective of gradient flows in measure spaces leads to the continuity equation or the McKean-Vlasov equation. We then solve the McKean-Vlasov equation numerically using the forward Euler iteration, where the forward Euler map depends on the density ratio (density difference) between the distribution at current iteration and the underlying target distribution. We further estimate the density ratio (density difference) via deep density-ratio (density-difference) fitting and derive explicit upper bounds on the estimation error. Experimental results on both synthetic datasets and real benchmark datasets support our theoretical findings and demonstrate the effectiveness of UnifiGem.
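The forward Euler iteration described in the abstract can be illustrated with a toy sketch. The fragment below is not the paper's method: it replaces the deep density-ratio estimator with a Gaussian surrogate fitted to the current particles, so that the gradient of the log density ratio is available in closed form. Particles drawn from a reference distribution are then pushed toward a standard normal target by Euler steps along the Wasserstein gradient-flow velocity; the step size `eta` and all distribution parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particles from the reference distribution N(4, 2^2);
# the target distribution pi is the standard normal N(0, 1).
x = rng.normal(4.0, 2.0, size=5000)
eta = 0.1  # forward Euler step size (illustrative choice)

for _ in range(200):
    # Gaussian surrogate for the current particle density rho_k,
    # standing in for the deep density-ratio estimator of the paper.
    m, s = x.mean(), x.std()
    grad_log_rho = -(x - m) / s**2   # d/dx log rho_k(x) under the surrogate
    grad_log_pi = -x                 # d/dx log pi(x) for pi = N(0, 1)
    # Euler step along v = -grad log(rho_k / pi) = grad log pi - grad log rho_k
    x = x + eta * (grad_log_pi - grad_log_rho)

# After the iteration, the particle mean and std are close to (0, 1).
print(x.mean(), x.std())
```

In the paper's setting the closed-form `grad_log_rho` is unavailable, which is exactly why the density ratio (or density difference) between the current iterate and the target is estimated by a deep network at each step.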
