Paper Title

The Deep Learning Galerkin Method for the General Stokes Equations

Authors

Jian Li, Jing Yue, Wen Zhang, Wansuo Duan

Abstract

The finite element method, finite difference method, finite volume method and spectral method have achieved great success in solving partial differential equations. However, the high accuracy of traditional numerical methods comes at the cost of efficiency. In particular, for high-dimensional problems, traditional numerical methods are often infeasible because of the subdivision of high-dimensional meshes and the differentiability and integrability requirements on high-order terms. In deep learning, neural networks can handle high-dimensional problems by increasing the number of layers or the number of neurons, which gives them significant advantages over traditional numerical methods. In this article, we consider the Deep Galerkin Method (DGM) for solving the general Stokes equations with a deep neural network and without generating a mesh. The DGM reduces the computational complexity and achieves competitive results. Here, based on the L2 error, we construct an objective function to control the performance of the approximate solution. Then, we prove the convergence of the objective function and the convergence of the neural network to the exact solution. Finally, the effectiveness of the proposed framework is demonstrated through numerical experiments.
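
The abstract describes building an objective function from the L2 residual of the Stokes system at randomly sampled, mesh-free points. Below is a minimal sketch of that idea in PyTorch for a 2D general Stokes problem (alpha*u - nu*Laplace(u) + grad(p) = f, div(u) = 0 on the unit square with Dirichlet data); the network architecture, coefficients, forcing term, boundary data, and loss weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a DGM-style objective for the 2D general Stokes system
#   alpha*u - nu*Laplace(u) + grad(p) = f,  div(u) = 0  in Omega,  u = g on the boundary,
# penalizing the L2 residual at randomly sampled (mesh-free) collocation points.
# The architecture, parameters, forcing, boundary data, and loss weights are
# illustrative assumptions, not the paper's exact setup.

import torch

torch.manual_seed(0)

# A plain MLP mapping (x, y) -> (u1, u2, p) stands in for the DGM network.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),
)

alpha, nu = 1.0, 1.0                     # assumed model coefficients

def f(xy):                               # assumed forcing term (zero for illustration)
    return torch.zeros(xy.shape[0], 2)

def g(xy):                               # assumed Dirichlet boundary data
    return torch.zeros(xy.shape[0], 2)

def objective(n_interior=1024, n_boundary=256):
    # Interior collocation points in the unit square, resampled each call.
    xy = torch.rand(n_interior, 2, requires_grad=True)
    out = net(xy)
    u, p = out[:, :2], out[:, 2]

    def grad(w):                         # gradient of a scalar field w.r.t. (x, y)
        return torch.autograd.grad(w.sum(), xy, create_graph=True)[0]

    gu1, gu2, gp = grad(u[:, 0]), grad(u[:, 1]), grad(p)
    lap_u = torch.stack([
        grad(gu1[:, 0])[:, 0] + grad(gu1[:, 1])[:, 1],
        grad(gu2[:, 0])[:, 0] + grad(gu2[:, 1])[:, 1],
    ], dim=1)

    momentum = alpha * u - nu * lap_u + gp - f(xy)   # momentum residual
    div_u = gu1[:, 0] + gu2[:, 1]                    # incompressibility residual

    # Boundary collocation points on the four edges of the unit square.
    t = torch.rand(n_boundary, 1)
    zeros, ones = torch.zeros_like(t), torch.ones_like(t)
    xb = torch.cat([
        torch.cat([t, zeros], 1), torch.cat([t, ones], 1),
        torch.cat([zeros, t], 1), torch.cat([ones, t], 1),
    ])
    ub = net(xb)[:, :2]

    # L2-type objective: interior residuals plus boundary mismatch.
    return (momentum.pow(2).mean()
            + div_u.pow(2).mean()
            + (ub - g(xb)).pow(2).mean())

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = objective()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(step, loss.item())
```

Resampling the collocation points at every optimization step is what makes the approach mesh-free; extending this sketch to higher dimensions would only change the input width of the first layer and the sampling routine, which reflects the abstract's point about handling high-dimensional problems without mesh subdivision.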
