Paper Title

Accelerating Continuous Normalizing Flow with Trajectory Polynomial Regularization

Paper Authors

Han-Hsien Huang, Mi-Yen Yeh

Paper Abstract

In this paper, we propose an approach to effectively accelerate the computation of continuous normalizing flows (CNF), which have been proven to be a powerful tool for tasks such as variational inference and density estimation. The training time cost of CNF can be extremely high because the number of function evaluations (NFE) required to solve the corresponding ordinary differential equations (ODEs) is very large. We argue that the high NFE results from large truncation errors when solving the ODEs. To address this problem, we propose adding a regularization term that penalizes the difference between the ODE trajectory and its fitted polynomial regression. The trajectory then approximates a polynomial function, so the truncation error becomes smaller. Furthermore, we provide two proofs and claim that the additional regularization does not harm training quality. Experimental results show that our proposed method reduces NFE by 42.3% to 71.3% on the task of density estimation, and by 19.3% to 32.1% on variational auto-encoders, while test losses are unaffected.
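To illustrate the core idea of the regularizer described in the abstract, the following is a minimal, hypothetical sketch (not the authors' implementation): given points sampled along an ODE trajectory, fit a least-squares polynomial in time to each state dimension and penalize the mean squared residual. The function name, polynomial degree, and use of `numpy.polyfit` are illustrative assumptions.

```python
import numpy as np

def trajectory_poly_penalty(traj, times, degree=2):
    """Hypothetical sketch of a trajectory polynomial regularizer.

    traj  : array of shape (T, D), trajectory states z(t_i)
    times : array of shape (T,), the sample times t_i
    Returns the mean squared residual between each dimension of the
    trajectory and its least-squares polynomial fit in t.
    """
    traj = np.asarray(traj, dtype=float)
    times = np.asarray(times, dtype=float)
    total = 0.0
    for d in range(traj.shape[1]):
        # Fit a degree-`degree` polynomial to dimension d over time.
        coeffs = np.polyfit(times, traj[:, d], degree)
        fit = np.polyval(coeffs, times)
        total += np.mean((traj[:, d] - fit) ** 2)
    return total / traj.shape[1]

# A trajectory that is exactly quadratic in t incurs (numerically) zero
# penalty; a highly oscillatory one incurs a large penalty.
times = np.linspace(0.0, 1.0, 20)
quad_traj = np.stack([times**2, 3 * times**2 - times], axis=1)
print(trajectory_poly_penalty(quad_traj, times, degree=2))
```

In training, a penalty like this would be added to the CNF loss with a weight hyperparameter, nudging the learned dynamics toward trajectories that low-order ODE solvers can integrate with few function evaluations.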
