Publication date: Nov 11, 2020
The use of machine learning to accelerate computer simulations is on the rise. In atomistic simulations, machine learning interatomic potentials (ML-IAPs) can significantly reduce computational costs while maintaining accuracy close to that of ab initio methods. To achieve this, ML-IAPs are trained on large datasets of images, i.e., atomistic configurations labeled with data from ab initio calculations. Focusing on carbon, we have created a dataset, CA-9, consisting of 48,000 images labeled with energies, forces, and stress tensors obtained via ab initio molecular dynamics (AIMD). We use deep learning to train state-of-the-art neural network potentials (NNPs), a form of ML-IAP, on the CA-9 dataset and investigate how the choice of training and validation data affects the performance of the NNPs. Our results show that image generation with AIMD produces a high degree of similarity between the generated images, which has a detrimental effect on the NNPs. However, by carefully choosing which images from the dataset are included in the training and validation data, this effect can be mitigated. We conclude by benchmarking our trained NNPs in real-world applications and show that we can reproduce results from ab initio calculations with higher accuracy than previously published ML or classical IAPs.
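The mitigation described above amounts to selecting a subset of AIMD frames that are well spread out in some descriptor space rather than taking consecutive, near-duplicate snapshots. The sketch below illustrates one common way to do this, greedy farthest-point sampling; the descriptor choice and function names are illustrative assumptions, not the procedure used for CA-9.

```python
import numpy as np

def select_diverse(descriptors, n_select):
    """Greedy farthest-point sampling over per-image descriptors.

    Picks images that are maximally spread out in descriptor space,
    reducing the near-duplicate frames typical of AIMD trajectories.
    """
    descriptors = np.asarray(descriptors, dtype=float)
    chosen = [0]  # start from the first image
    # Distance from every image to the nearest already-chosen image
    dist = np.linalg.norm(descriptors - descriptors[0], axis=1)
    while len(chosen) < n_select:
        idx = int(np.argmax(dist))  # image farthest from all chosen so far
        chosen.append(idx)
        dist = np.minimum(
            dist, np.linalg.norm(descriptors - descriptors[idx], axis=1)
        )
    return chosen

# Synthetic example: two tight clusters of near-duplicate frames.
rng = np.random.default_rng(0)
cluster_a = rng.normal(0.0, 0.01, size=(50, 3))
cluster_b = rng.normal(5.0, 0.01, size=(50, 3))
frames = np.vstack([cluster_a, cluster_b])

picked = select_diverse(frames, 4)
# The selection covers both clusters instead of densely sampling one.
print(picked)
```

In practice the descriptors could be as simple as per-image energies or as rich as structural fingerprints; the key point from the abstract is that some deliberate selection, rather than raw trajectory order, determines the training/validation split.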
| Size | File | Description |
|---|---|---|
| 2.4 KiB | | Readme file |
| 3.4 KiB | | Python scripts used to read data from VASP and train neural network potentials |
| 453.4 MiB | | Datasets for training and testing of neural network potentials |
| 13.1 MiB | | The best trained neural network potentials for each dataset |