Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles

Aik Rui Tan1*, Shingo Urata2*, Samuel Goldman3*, Johannes C. B. Dietschreit1*, Rafael Gómez-Bombarelli1*

1 Department of Materials Science and Engineering, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America

2 Innovative Technology Laboratories, AGC Inc., Yokohama, Japan

3 Computational and Systems Biology, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America

* Corresponding authors emails: atan14@mit.edu, shingo.urata@agc.com, samlg@mit.edu, jdiet@mit.edu, rafagb@mit.edu
DOI: 10.24435/materialscloud:mv-a3 (version v2)

Publication date: Nov 21, 2023

How to cite this record

Aik Rui Tan, Shingo Urata, Samuel Goldman, Johannes C. B. Dietschreit, Rafael Gómez-Bombarelli, Single-model uncertainty quantification in neural network potentials does not consistently outperform model ensembles, Materials Cloud Archive 2023.179 (2023), https://doi.org/10.24435/materialscloud:mv-a3


Neural networks (NNs) often assign high confidence to their predictions, even on points far outside the training distribution, making uncertainty quantification (UQ) a challenge. When NNs are employed to model interatomic potentials in materials systems, this overconfidence leads to unphysical structures that disrupt simulations, or to biased statistics and dynamics that do not reflect the true physics. Differentiable UQ techniques can identify new informative data and drive active-learning loops for robust potentials. However, a variety of UQ techniques, including newly developed ones, exist for atomistic simulations, and there are no clear guidelines for which are most effective or suitable for a given case. In this work, we examine multiple UQ schemes for improving the robustness of NN interatomic potentials (NNIPs) through active learning. In particular, we compare incumbent ensemble-based methods against strategies that use single, deterministic NNs: mean-variance estimation (MVE), deep evidential regression, and Gaussian mixture models (GMMs). We explore three datasets that range from in-domain interpolative learning to more extrapolative out-of-domain generalization challenges: rMD17, ammonia inversion, and bulk silica glass. Performance is measured across multiple metrics that relate model error to uncertainty. Our experiments show that no single method consistently outperformed the others across all metrics. Ensembling remained better for generalization and NNIP robustness; MVE proved effective only for in-domain interpolation, while the GMM approach was better out-of-domain; and evidential regression, despite its promise, was not the preferable alternative in any of the cases. More broadly, cost-effective single deterministic models cannot yet consistently match or outperform ensembling for uncertainty quantification in NNIPs.
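To make the contrast concrete, the two flavors of UQ compared above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `ensemble_uncertainty` takes the per-model predictions of an ensemble and uses their spread as the (epistemic) uncertainty, while `mve_nll` is the Gaussian negative log-likelihood that a mean-variance-estimation network minimizes so that its predicted variance doubles as the uncertainty estimate. Both function names are hypothetical.

```python
import math
from statistics import fmean

def ensemble_uncertainty(predictions):
    """Ensemble UQ: given M independently trained models' predictions
    for one point, return the mean prediction and the inter-model
    variance, which serves as the epistemic uncertainty estimate."""
    mean = fmean(predictions)
    var = fmean((p - mean) ** 2 for p in predictions)
    return mean, var

def mve_nll(y_true, mu, var, eps=1e-6):
    """Mean-variance estimation (MVE): a single network outputs both a
    mean mu and a variance var per target and is trained on this
    Gaussian negative log-likelihood; the learned var is then read off
    directly as the predictive uncertainty."""
    var = max(var, eps)  # clamp to keep the log and division finite
    return 0.5 * (math.log(2 * math.pi * var) + (y_true - mu) ** 2 / var)
```

The practical trade-off studied in the paper follows from these definitions: the ensemble estimate requires training and evaluating M models, whereas MVE (like evidential regression and GMM-based approaches) obtains an uncertainty from one deterministic forward pass.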



Files:
- Training + validation dataset for silica glass (92.5 MiB)
- Testing dataset for silica glass (26.2 MiB)
- Training + validation dataset for ammonia (38.8 KiB)
- Testing dataset for ammonia (64.5 KiB)
- Description of the files and units (4.4 KiB)


Files and data are licensed under the terms of the following license: Creative Commons Attribution 4.0 International.
Metadata, except for email addresses, are licensed under the Creative Commons Attribution Share-Alike 4.0 International license.


Keywords: uncertainty quantification, neural network interatomic potentials, single deterministic neural networks, adversarial sampling, silica glass, ammonia

Version history:

2023.179 (version v2) [This version], Nov 21, 2023, DOI: 10.24435/materialscloud:mv-a3
2023.73 (version v1), May 04, 2023, DOI: 10.24435/materialscloud:55-sd