Paper Title
Benchmarking Structural Evolution Methods for Training of Machine Learned Interatomic Potentials
Paper Authors
Paper Abstract
When creating training data for machine-learned interatomic potentials (MLIPs), it is common to create initial structures and evolve them using molecular dynamics to sample a larger configuration space. We benchmark two other modalities of evolving structures, contour exploration and dimer-method searches, against molecular dynamics for their ability to produce diverse and robust density functional theory training data sets for MLIPs. We also discuss in detail the generation of initial structures, either from known structures or from random structures, to further formalize structure-sourcing processes. The polymorph-rich zirconium-oxygen composition space is used as a rigorous benchmark system for comparing the performance of MLIPs trained on structures generated by these structural evolution methods. Using Behler-Parrinello neural networks as our machine-learned interatomic potential models, we find that contour exploration and dimer-method searches are generally superior to molecular dynamics in terms of spatial descriptor diversity and statistical accuracy.
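As an illustrative sketch only, not the authors' implementation, the snippet below shows how one might evolve an initial structure with molecular dynamics and with contour exploration in ASE, harvesting periodic snapshots as candidate training configurations (ASE also provides the dimer method in `ase.dimer`). The EMT calculator on bulk Cu is a lightweight stand-in for the paper's DFT calculations on Zr-O structures; the helper names `md_snapshots` and `contour_snapshots` and all parameter values are assumptions chosen for illustration.

```python
# Illustrative sketch (not the paper's code): two ways of evolving an initial
# structure to sample configurations for MLIP training data.
import numpy as np
from ase import units
from ase.build import bulk
from ase.calculators.emt import EMT
from ase.md.langevin import Langevin
from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
from ase.md.contour_exploration import ContourExploration


def md_snapshots(atoms, steps=200, interval=20, temperature_K=600):
    """Evolve with Langevin molecular dynamics and collect periodic snapshots."""
    atoms = atoms.copy()
    atoms.calc = EMT()  # stand-in for a DFT calculator
    MaxwellBoltzmannDistribution(atoms, temperature_K=temperature_K)
    dyn = Langevin(atoms, 2 * units.fs, temperature_K=temperature_K, friction=0.02)
    frames = []
    dyn.attach(lambda: frames.append(atoms.copy()), interval=interval)
    dyn.run(steps)
    return frames


def contour_snapshots(atoms, steps=200, interval=20):
    """Walk along a constant-potential-energy contour and collect snapshots."""
    atoms = atoms.copy()
    atoms.calc = EMT()  # stand-in for a DFT calculator
    atoms.rattle(stdev=0.05, seed=0)  # step off the minimum so forces are nonzero
    dyn = ContourExploration(atoms, maxstep=0.5, parallel_drift=0.1)
    frames = []
    dyn.attach(lambda: frames.append(atoms.copy()), interval=interval)
    dyn.run(steps)
    return frames


if __name__ == "__main__":
    initial = bulk("Cu", cubic=True).repeat((2, 2, 2))
    md_frames = md_snapshots(initial)
    ce_frames = contour_snapshots(initial)
    print(f"MD snapshots: {len(md_frames)}, contour-exploration snapshots: {len(ce_frames)}")
```

In a workflow like the one benchmarked here, the collected frames would then be recomputed with DFT and used to train an MLIP such as a Behler-Parrinello neural network.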