This record has versions v1, v2. This is version v1.
Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles

Aik Rui Tan1*, Shingo Urata2*, Samuel Goldman3*, Johannes C. B. Dietschreit1*, Rafael Gómez-Bombarelli1*

1 Department of Materials Science and Engineering, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America

2 Innovative Technology Laboratories, AGC Inc., Yokohama, Japan

3 Computational and Systems Biology, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America

* Corresponding authors emails: atan14@mit.edu, shingo.urata@agc.com, samlg@mit.edu, jdiet@mit.edu, rafagb@mit.edu
DOI: 10.24435/materialscloud:55-sd [version v1]

Publication date: May 04, 2023

How to cite this record

Aik Rui Tan, Shingo Urata, Samuel Goldman, Johannes C. B. Dietschreit, Rafael Gómez-Bombarelli, Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles, Materials Cloud Archive 2023.73 (2023), https://doi.org/10.24435/materialscloud:55-sd

Description

Neural networks (NNs) often assign high confidence to their predictions, even for points far out-of-distribution, making uncertainty quantification (UQ) a challenge. When they are employed to model interatomic potentials in materials systems, this problem leads to unphysical structures that disrupt simulations, or to biased statistics and dynamics that do not reflect the true physics. Differentiable UQ techniques can find new informative data and drive active learning loops for robust potentials. However, a variety of UQ techniques, including newly developed ones, exist for atomistic simulations, and there are no clear guidelines for which are most effective or suitable for a given case. In this work, we examine multiple UQ schemes for improving the robustness of NN interatomic potentials (NNIPs) through active learning. In particular, we compare incumbent ensemble-based methods against strategies that use single, deterministic NNs: mean-variance estimation (MVE), deep evidential regression, and Gaussian mixture models (GMMs). We explore three datasets ranging from in-domain interpolative learning to more extrapolative out-of-domain generalization challenges: rMD17, ammonia inversion, and bulk silica glass. Performance is measured across multiple metrics relating model error to uncertainty. Our experiments show that no single method consistently outperformed the others across the various metrics. Ensembling remained better at generalization and for NNIP robustness; MVE only proved effective for in-domain interpolation, while GMM was better out-of-domain; and evidential regression, despite its promise, was not the preferable alternative in any of the cases. More broadly, cost-effective, single deterministic models cannot yet consistently match or outperform ensembling for uncertainty quantification in NNIPs.
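As a toy illustration of two of the UQ schemes compared above, the following sketch contrasts ensemble-based uncertainty (variance across member predictions) with mean-variance estimation, where a single network outputs both a mean and a variance and is trained with a Gaussian negative log-likelihood. The data and sizes here are hypothetical placeholders, not the paper's actual models or datasets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: energies predicted by a 5-member ensemble
# for 4 test structures.
ensemble_preds = rng.normal(loc=-3.0, scale=0.05, size=(5, 4))

# Ensemble UQ: the mean over members is the prediction;
# the variance over members is the uncertainty estimate.
ens_mean = ensemble_preds.mean(axis=0)
ens_var = ensemble_preds.var(axis=0)

# MVE UQ: a single network outputs a mean and a variance per point
# and is trained by minimizing the Gaussian negative log-likelihood.
def gaussian_nll(mu, var, y):
    """Per-point Gaussian NLL (constant term dropped)."""
    return 0.5 * (np.log(var) + (y - mu) ** 2 / var)

print(ens_mean.shape, ens_var.shape)  # one prediction and one variance per point
```

In an active-learning loop, points with the largest estimated variance would be selected for new reference calculations.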

Materials Cloud sections using this data

No Explore or Discover sections associated with this archive record.

Files

File name          Size       Description                          MD5
silica_train.xyz   111.6 MiB  Training dataset for silica          d1e53f5238e084691cec2f23d485f154
silica_test.xyz    28.2 MiB   Testing dataset for silica           3996912931795a2a3245e2cbb4e4cade
ammonia_train.xyz  37.6 KiB   Training dataset for ammonia         4e24ab2ab0d26246a5fff157182d76e1
ammonia_test.xyz   95.7 KiB   Testing dataset for ammonia          7bb78eefb69c724dc7276f916c1eb70c
README.md          2.1 KiB    Description of the files and units   599dd5689f3b96ac856db3120bf33dae

License

Files and data are licensed under the terms of the following license: Creative Commons Attribution 4.0 International.
Metadata, except for email addresses, are licensed under the Creative Commons Attribution Share-Alike 4.0 International license.

Keywords

Uncertainty quantification, neural network interatomic potentials, single deterministic neural networks, adversarial sampling, silica glass, ammonia

Version history:

2023.179 (version v2), Nov 21, 2023, DOI: 10.24435/materialscloud:mv-a3
2023.73 (version v1) [This version], May 04, 2023, DOI: 10.24435/materialscloud:55-sd