Paper title
Training-free hyperparameter optimization of neural networks for electronic structures in matter
Paper authors
Paper abstract
A myriad of phenomena in materials science and chemistry rely on quantum-level simulations of the electronic structure in matter. While moving to larger length and time scales has been a pressing issue for decades, such large-scale electronic structure calculations are still challenging despite modern software approaches and advances in high-performance computing. The silver lining in this regard is the use of machine learning to accelerate electronic structure calculations -- this line of research has recently gained growing attention. The grand challenge therein is finding a suitable machine-learning model during a process called hyperparameter optimization. This, however, causes a massive computational overhead in addition to that of data generation. We accelerate the construction of neural network models by roughly two orders of magnitude by circumventing excessive training during the hyperparameter optimization phase. We demonstrate our workflow for Kohn-Sham density functional theory, the most popular computational method in materials science and chemistry.
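To make the abstract's central idea concrete, below is a minimal Python/PyTorch sketch of what a training-free hyperparameter search can look like: candidate network configurations are ranked by a cheap surrogate score computed from the untrained network, rather than by fully training each one. The surrogate used here (a Jacobian-based input-sensitivity score), the MLP builder, and all dimensions are illustrative assumptions, not the authors' actual method.

```python
# Sketch of training-free hyperparameter optimization: rank candidate
# architectures with a cheap proxy score instead of full training.
# All names, dimensions, and the surrogate choice are illustrative.
import itertools
import torch
import torch.nn as nn

def build_mlp(width: int, depth: int, in_dim: int = 16, out_dim: int = 8) -> nn.Sequential:
    """Build a feed-forward network for one (width, depth) hyperparameter choice."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

def surrogate_score(model: nn.Module, x: torch.Tensor) -> float:
    """Training-free proxy: spread of the untrained network's input sensitivities.

    Computes the gradient of the summed outputs w.r.t. the inputs and uses the
    standard deviation of its per-sample norms as a stand-in for expressivity.
    Real training-free surrogates differ; this is only one plausible choice.
    """
    x = x.clone().requires_grad_(True)
    y = model(x)
    grad = torch.autograd.grad(y.sum(), x)[0]   # shape: (batch, in_dim)
    return grad.norm(dim=1).std().item()

# Rank a small grid of (width, depth) candidates using a single batch of
# random stand-in data -- no gradient-descent training is performed.
x_batch = torch.randn(64, 16)
grid = itertools.product([128, 256, 512], [2, 4, 6])
ranked = sorted(grid, key=lambda hp: surrogate_score(build_mlp(*hp), x_batch), reverse=True)
print("Best candidate (width, depth):", ranked[0])
```

Because each candidate is scored with one forward and one backward pass instead of a full training run, the search cost drops by whatever factor full training would have dominated, which is the kind of saving (roughly two orders of magnitude in the abstract) that motivates this approach.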