Published October 17, 2024 | Version v1
Dataset | Open Access

A prediction rigidity formalism for low-cost uncertainties in trained neural networks

  • COSMO, Institut des Matériaux, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Vaud, Switzerland


Description

Quantifying the uncertainty of regression models is essential to ensure their reliability, particularly since their application often extends beyond their training domain. Based on the solution of a constrained optimization problem, this work proposes 'prediction rigidities' as a formalism to obtain uncertainties for arbitrary pre-trained regressors. A clear connection between the proposed framework and Bayesian inference is established, and a last-layer approximation is developed and rigorously justified to enable the application of the method to neural networks. This extension affords cheap uncertainties without any modification to the neural network itself or to its training procedure. The effectiveness of the approach is demonstrated on a wide range of regression tasks, from simple toy models to applications in chemistry and meteorology. This record contains the computational experiments supporting the MLST paper "A prediction rigidity formalism for low-cost uncertainties in trained neural networks".
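To illustrate the last-layer idea in the abstract above, the sketch below shows a minimal NumPy version of a last-layer predictive variance: new inputs are mapped to the features entering a trained network's final linear layer, and the variance is read off from the regularized Gram matrix of the training-set features. This is an illustrative sketch, not the authors' code from this record; the function name, the regularizer `lam`, and the calibration prefactor `sigma2` are assumptions.

```python
import numpy as np

def last_layer_variance(F, f_star, sigma2=1.0, lam=1e-6):
    """Predictive variance from last-layer features (illustrative sketch).

    F      : (n_train, d) features entering the final linear layer,
             evaluated on the training set of an already-trained network.
    f_star : (n_test, d) the same features for new inputs.
    sigma2 : calibration prefactor, to be fitted on held-out data (assumption).
    lam    : small regularizer keeping the Gram matrix invertible (assumption).
    """
    d = F.shape[1]
    # Regularized Gram matrix of the training features; its inverse plays
    # the role of a posterior covariance over the last-layer weights.
    H = F.T @ F + lam * np.eye(d)
    # Solve H x = f_star^T once instead of forming the explicit inverse.
    Hinv_f = np.linalg.solve(H, f_star.T)          # shape (d, n_test)
    # Variance for each test point: sigma2 * f_star^T H^{-1} f_star.
    return sigma2 * np.einsum("nd,dn->n", f_star, Hinv_f)

# Toy usage with random features standing in for a trained network's.
rng = np.random.default_rng(0)
F = rng.normal(size=(1000, 64))
f_star = rng.normal(size=(5, 64))
print(last_layer_variance(F, f_star))
```

Because only a d x d Gram matrix of the training features is needed, the per-prediction cost is a single linear solve, which is what makes uncertainties of this kind cheap once the network is trained.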

Files (499.0 MiB)

Name                  Size       md5
files_description.md  332 Bytes  dd2691c8cfa1cdf72f79df367e0bf8d2
(name not shown)      499.0 MiB  70c0fb1fb3bce6f9147a65890f27ec7f

References

Paper in which the method is described:
F. Bigi, S. Chong, M. Ceriotti, F. Grasselli, "A prediction rigidity formalism for low-cost uncertainties in trained neural networks", Mach. Learn.: Sci. Technol. (2024). Preprint: https://arxiv.org/abs/2403.02251, doi: 10.1088/2632-2153/ad805f
