Paper Title

Is $L^2$ Physics-Informed Loss Always Suitable for Training Physics-Informed Neural Network?

Authors

Chuwei Wang, Shanda Li, Di He, Liwei Wang

Abstract

The Physics-Informed Neural Network (PINN) approach is a new and promising way to solve partial differential equations using deep learning. The $L^2$ physics-informed loss is the de facto standard for training Physics-Informed Neural Networks. In this paper, we challenge this common practice by investigating the relationship between the loss function and the approximation quality of the learned solution. In particular, we leverage the concept of stability from the partial differential equations literature to study the asymptotic behavior of the learned solution as the loss approaches zero. With this concept, we study an important class of high-dimensional non-linear PDEs in optimal control, the Hamilton-Jacobi-Bellman (HJB) equation, and prove that, for a general $L^p$ physics-informed loss, a wide class of HJB equations is stable only if $p$ is sufficiently large. Therefore, the commonly used $L^2$ loss is not suitable for training PINNs on those equations, while the $L^{\infty}$ loss is a better choice. Based on this theoretical insight, we develop a novel PINN training algorithm that minimizes the $L^{\infty}$ loss for HJB equations, in a spirit similar to adversarial training. The effectiveness of the proposed algorithm is empirically demonstrated through experiments. Our code is released at https://github.com/LithiumDA/L_inf-PINN.
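To make the abstract's idea concrete, below is a minimal, self-contained sketch of the min-max structure it describes: an inner "adversarial" step that searches for collocation points where the PDE residual is worst, and an outer step that minimizes the $L^{\infty}$ loss at those points. This is not the authors' implementation (see their repository for that): the toy 1-D problem, the two-parameter model `u_theta`, and the finite-difference gradients are all simplifications chosen to keep the sketch dependency-free; a real PINN would use a neural network and automatic differentiation.

```python
import numpy as np

# Toy problem: fit u_theta(x) = theta[0]*x + theta[1]*x**2 so that u'(x) = 1
# on [0, 1] (exact answer: theta = (1, 0)).

def residual(theta, x):
    # PDE residual r(x) = u_theta'(x) - 1 = theta[0] + 2*theta[1]*x - 1
    return theta[0] + 2.0 * theta[1] * x - 1.0

def linf_loss(theta, xs):
    # L^inf physics-informed loss: worst-case |residual| over the points xs
    return np.max(np.abs(residual(theta, xs)))

def adversarial_points(theta, n_candidates=256, n_keep=8, seed=0):
    # Inner maximization (adversarial step): among random candidate points,
    # keep those where the residual is currently largest in magnitude.
    rng = np.random.default_rng(seed)
    xs = rng.uniform(0.0, 1.0, n_candidates)
    worst = np.argsort(-np.abs(residual(theta, xs)))[:n_keep]
    return xs[worst]

def train(steps=200, lr=0.2, eps=1e-5, seed=0):
    # Outer minimization: subgradient descent on the L^inf loss at the worst
    # points, using a finite-difference gradient and a decaying step size.
    theta = np.array([0.0, 0.5])
    for step in range(steps):
        xs = adversarial_points(theta, seed=seed + step)
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            tp, tm = theta.copy(), theta.copy()
            tp[i] += eps
            tm[i] -= eps
            grad[i] = (linf_loss(tp, xs) - linf_loss(tm, xs)) / (2.0 * eps)
        theta -= (lr / np.sqrt(step + 1.0)) * grad
    return theta
```

Contrast this with standard PINN training, which would minimize the mean of `residual(theta, xs)**2` over fixed or uniformly resampled points: the max-based loss forces the optimizer to attend to the worst-violated region of the domain, which is exactly the behavior the paper's stability analysis motivates.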
