Title
Towards Neural Sparse Linear Solvers
Authors
Abstract
Large sparse symmetric linear systems appear in several branches of science and engineering thanks to the widespread use of the finite element method (FEM). The fastest sparse linear solvers available implement hybrid iterative methods. These methods rely on heuristic algorithms to permute rows and columns or to find a preconditioner matrix. In addition, they are inherently sequential, which prevents them from fully exploiting GPU processing power. We propose neural sparse linear solvers, a deep learning framework for learning approximate solvers for sparse symmetric linear systems. Our method relies on representing a sparse symmetric linear system as an undirected weighted graph. Such a graph representation is inherently permutation-equivariant and scale-invariant, and it can serve as the input to a graph neural network trained to regress the solution. We test neural sparse linear solvers on static linear analysis problems from structural engineering. Our method is less accurate than classic algorithms, but it is hardware-independent, fast on GPUs, and applicable to generic sparse symmetric systems without additional assumptions. Although many limitations remain, this study demonstrates a general approach to tackling problems involving sparse symmetric matrices with graph neural networks.
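The core idea of encoding a sparse symmetric system Ax = b as an undirected weighted graph can be sketched as follows: each unknown becomes a node (carrying b_i and the diagonal entry A_ii as features), and each nonzero off-diagonal entry A_ij becomes an undirected weighted edge. This is an illustrative encoding under assumed conventions, not the paper's exact feature design; the function name `system_to_graph` is hypothetical.

```python
import numpy as np
import scipy.sparse as sp

def system_to_graph(A, b):
    """Encode the sparse symmetric system Ax = b as an undirected
    weighted graph (illustrative sketch, not the paper's exact scheme).

    Nodes  : one per unknown, with features (b_i, A_ii).
    Edges  : one per nonzero off-diagonal A_ij; symmetry of A makes
             the graph undirected, so each pair (i, j) is kept once.
    """
    A = sp.coo_matrix(A)
    # Node features: right-hand side and diagonal, one row per unknown.
    node_feats = np.stack([b, A.diagonal()], axis=1)
    # Keep each undirected edge once: strict upper triangle, no self-loops.
    mask = A.row < A.col
    edges = np.stack([A.row[mask], A.col[mask]], axis=0)  # shape (2, E)
    edge_weights = A.data[mask]
    return node_feats, edges, edge_weights

# Tiny 3x3 symmetric example.
A = sp.csr_matrix(np.array([[4.0, 1.0, 0.0],
                            [1.0, 3.0, 2.0],
                            [0.0, 2.0, 5.0]]))
b = np.array([1.0, 0.0, -1.0])
nodes, edges, weights = system_to_graph(A, b)
```

Because the graph is built only from the sparsity pattern and values of A, relabeling the unknowns merely permutes nodes, which is why a graph neural network operating on this representation is permutation-equivariant by construction; scale invariance would additionally require normalizing A and b before encoding.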