Paper Title
Investigating and Mitigating Failure Modes in Physics-informed Neural Networks (PINNs)
Paper Authors
Abstract
This paper explores the difficulties in solving partial differential equations (PDEs) using physics-informed neural networks (PINNs). PINNs use physics as a regularization term in the objective function. However, a drawback of this approach is the requirement for manual hyperparameter tuning, making it impractical in the absence of validation data or prior knowledge of the solution. Our investigations of the loss landscapes and backpropagated gradients in the presence of physics reveal that existing methods produce non-convex loss landscapes that are hard to navigate. Our findings demonstrate that high-order PDEs contaminate backpropagated gradients and hinder convergence. To address these challenges, we introduce a novel method that bypasses the calculation of high-order derivative operators and mitigates the contamination of backpropagated gradients. Consequently, we reduce the dimension of the search space and make learning PDEs with non-smooth solutions feasible. Our method also provides a mechanism to focus on complex regions of the domain. In addition, we present a dual unconstrained formulation based on the Lagrange multiplier method to enforce equality constraints on the model's predictions, with adaptive and independent learning rates inspired by adaptive subgradient methods. We apply our approach to solve various linear and non-linear PDEs.
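The baseline setup the abstract critiques, physics entering the loss as a residual term with a manually tuned weight, and high-order derivatives obtained through nested automatic differentiation, can be sketched as follows. This is a minimal illustration, not the paper's method: the network size, the PDE (a 1-D Poisson problem with a known smooth solution), and the weight `lam` are all illustrative assumptions.

```python
import torch

torch.manual_seed(0)

# Small MLP u_theta(x) approximating the PDE solution.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    # Residual of u''(x) + pi^2 sin(pi x) = 0 (1-D Poisson problem).
    x = x.requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    # Nested autograd call: this is the high-order derivative operator
    # whose gradient contamination the paper analyzes.
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + torch.pi**2 * torch.sin(torch.pi * x)

x_int = torch.rand(64, 1)            # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])  # Dirichlet boundary: u(0) = u(1) = 0

lam = 1.0  # hand-tuned weight -- the hyperparameter the abstract criticizes
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = (pde_residual(x_int) ** 2).mean() + lam * (model(x_bc) ** 2).mean()
    loss.backward()
    opt.step()
```

The fixed weight `lam` is what the paper's dual unconstrained formulation replaces: boundary conditions become equality constraints whose Lagrange multipliers are updated with adaptive, per-constraint learning rates rather than tuned by hand against validation data.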