Title

Synthetic data enable experiments in atomistic machine learning

Authors

Gardner, John L. A., Beaulieu, Zoé Faure, Deringer, Volker L.

Abstract

Machine-learning models are increasingly used to predict properties of atoms in chemical systems. There have been major advances in developing descriptors and regression frameworks for this task, typically starting from (relatively) small sets of quantum-mechanical reference data. Larger datasets of this kind are becoming available, but remain expensive to generate. Here we demonstrate the use of a large dataset that we have "synthetically" labelled with per-atom energies from an existing ML potential model. The cheapness of this process, compared to the quantum-mechanical ground truth, allows us to generate millions of datapoints, in turn enabling rapid experimentation with atomistic ML models from the small- to the large-data regime. This approach allows us here to compare regression frameworks in depth, and to explore visualisation based on learned representations. We also show that learning synthetic data labels can be a useful pre-training task for subsequent fine-tuning on small datasets. In the future, we expect that our open-sourced dataset, and similar ones, will be useful in rapidly exploring deep-learning models in the limit of abundant chemical data.
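
The central idea of the abstract, labelling many structures with per-atom energies predicted by an existing, cheap model instead of by quantum-mechanical calculations, can be sketched in a few lines. The snippet below is a hedged illustration, not the authors' pipeline: the input file name is hypothetical, and ASE's built-in EMT calculator (which covers only a limited set of elements) merely stands in for the ML potential actually used in the paper.

```python
# Minimal sketch (assumptions noted above) of "synthetic" labelling:
# evaluate an existing cheap potential on unlabelled structures and
# store its per-atom energy predictions as training labels.
from ase.io import read, write
from ase.calculators.emt import EMT

structures = read("unlabelled.xyz", index=":")   # hypothetical file of unlabelled structures

for atoms in structures:
    atoms.calc = EMT()                           # stand-in for an existing ML potential
    # per-atom energies from the model become the "synthetic" labels
    atoms.arrays["synthetic_energy"] = atoms.get_potential_energies()

write("synthetic_labels.xyz", structures)        # cheap enough to repeat for millions of structures
```

Because each evaluation costs only a model inference rather than a quantum-mechanical calculation, a dataset of this kind can be scaled to millions of labelled atoms, which is what enables the rapid small-to-large-data experiments described in the abstract.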
