Paper Title
Solving high-dimensional eigenvalue problems using deep neural networks: A diffusion Monte Carlo like approach
Paper Authors
Paper Abstract
We propose a new method to solve eigenvalue problems for linear and semilinear second-order differential operators in high dimensions based on deep neural networks. The eigenvalue problem is reformulated as a fixed-point problem of the semigroup flow induced by the operator, whose solution can be represented by the Feynman-Kac formula in terms of forward-backward stochastic differential equations. The method shares a similar spirit with diffusion Monte Carlo but augments it with a direct approximation of the eigenfunction through a neural-network ansatz. The fixed-point criterion provides a natural loss function for searching parameters via optimization. Our approach provides accurate eigenvalue and eigenfunction approximations in several numerical examples, including the Fokker-Planck operator and linear and nonlinear Schrödinger operators in high dimensions.
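To make the fixed-point idea concrete, the following is a minimal, hypothetical sketch and not the authors' implementation. It assumes PyTorch, the d-dimensional harmonic-oscillator Schrödinger operator Lu = -(1/2)Δu + (|x|^2/2)u whose ground-state eigenvalue is d/2, a single network whose gradient is taken by autograd in place of the paper's separate gradient network, and a simple normalization penalty; all names, architecture choices, and hyperparameters are illustrative assumptions.

# Hypothetical minimal sketch, not the authors' code: linear Schrodinger operator
# L u = -0.5*Laplacian(u) + V(x)*u with the harmonic potential V(x) = |x|^2 / 2,
# whose ground-state eigenvalue is d/2.  Architecture, hyperparameters, and the
# normalization penalty are illustrative assumptions.
import torch

d, batch, n_steps, T = 2, 512, 10, 0.1
dt = T / n_steps

psi = torch.nn.Sequential(                   # neural-network ansatz for the eigenfunction
    torch.nn.Linear(d, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))
lam = torch.nn.Parameter(torch.tensor(0.5))  # eigenvalue treated as a trainable scalar
opt = torch.optim.Adam(list(psi.parameters()) + [lam], lr=1e-3)

def V(x):                                    # potential term of the operator
    return 0.5 * (x ** 2).sum(dim=1, keepdim=True)

def grad_psi(x):                             # gradient of the ansatz via autograd
    x = x.detach().requires_grad_(True)      # X_t carries no parameter dependence, so detaching is safe
    return torch.autograd.grad(psi(x).sum(), x, create_graph=True)[0]

for step in range(2000):
    x = torch.randn(batch, d)                # starting points of the forward SDE X_t = x_0 + W_t
    y = psi(x)                               # backward variable initialized by the ansatz
    for _ in range(n_steps):
        dw = dt ** 0.5 * torch.randn(batch, d)
        # Ito step of the backward equation: dy = (V - lambda) * y * dt + grad(psi) . dW
        y = y + (V(x) - lam) * y * dt + (grad_psi(x) * dw).sum(dim=1, keepdim=True)
        x = x + dw                           # Brownian forward dynamics
    norm = (psi(x) ** 2).mean()
    # terminal-matching (fixed-point) residual plus a penalty excluding the trivial zero solution
    loss = ((y - psi(x)) ** 2).mean() + (norm - 1.0) ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(lam))                            # for the ground state the exact eigenvalue is d/2

The terminal-matching loss reflects the fixed-point criterion: when the ansatz and the trained scalar form an eigenpair, Itô's formula makes the propagated value y coincide with the ansatz evaluated at the terminal point, up to time discretization, so the residual vanishes.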