On double-descent in uncertainty quantification in overparametrized models (code)
JSON Export
{
"revision": 6,
"id": "1890",
"created": "2023-09-07T16:39:52.210946+00:00",
"metadata": {
"doi": "10.24435/materialscloud:zb-71",
"status": "published",
"title": "On double-descent in uncertainty quantification in overparametrized models (code)",
"mcid": "2023.145",
"license_addendum": null,
"_files": [
{
"description": "compressed files contained in the repository https://github.com/SPOC-group/double_descent_uncertainty",
"key": "double_descent_uncertainty-main.zip",
"size": 532532,
"checksum": "md5:eb0de263154e9b45d2248211414d9815"
},
{
"description": "README file describing the structure of the code",
"key": "README.txt",
"size": 1255,
"checksum": "md5:b51340aa9b662d3826492211878a02f2"
}
],
"owner": 1130,
"_oai": {
"id": "oai:materialscloud.org:1890"
},
"keywords": [
"MARVEL/P2",
"uncertainty quantification",
"neural networks",
"numerical simulation"
],
"conceptrecid": "1889",
"is_last": true,
"references": [
{
"type": "Journal reference",
"doi": "https://doi.org/10.48550/arXiv.2210.12760",
"url": "https://arxiv.org/abs/2210.12760",
"citation": "L. Clarte, B. Loureiro, F. Krzakala, L. Zdeborova, Proceedings of Machine Learning Research 206, 7089-7125 (2023)"
}
],
"publication_date": "Sep 19, 2023, 16:36:28",
"license": "Creative Commons Attribution 4.0 International",
"id": "1890",
"description": "Uncertainty quantification is a central challenge in reliable and trustworthy machine learning. Naive measures such as last-layer scores are well-known to yield overconfident estimates in the context of overparametrized neural networks. Several methods, ranging from temperature scaling to different Bayesian treatments of neural networks, have been proposed to mitigate overconfidence, most often supported by the numerical observation that they yield better calibrated uncertainty measures. In this work, we provide a sharp comparison between popular uncertainty measures for binary classification in a mathematically tractable model for overparametrized neural networks: the random features model. We discuss a trade-off between classification accuracy and calibration, unveiling a double descent like behavior in the calibration curve of optimally regularized estimators as a function of overparametrization. This is in contrast with the empirical Bayes method, which we show to be well calibrated in our setting despite the higher generalization error and overparametrization.\nThis record provides the code to reproduce the numerical experiments of the related paper \"On double-descent in uncertainty quantification in overparametrized models\".",
"version": 1,
"contributors": [
{
"email": "lucas.clarte@epfl.ch",
"affiliations": [
"\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), Statistical Physics of Computation lab., CH-1015 Lausanne, Switzerland"
],
"familyname": "Clarte",
"givennames": "Lucas"
},
{
"affiliations": [
"D\u00e9partement d\u2019Informatique, \u00c9cole Normale Sup\u00e9rieure - PSL & CNRS, Paris, France"
],
"familyname": "Loureiro",
"givennames": "Bruno"
},
{
"affiliations": [
"\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), Information, Learning and Physics lab., CH-1015 Lausanne, Switzerland"
],
"familyname": "Krzakala",
"givennames": "Florent"
},
{
"affiliations": [
"\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), Statistical Physics of Computation lab., CH-1015 Lausanne, Switzerland"
],
"familyname": "Zdeborova",
"givennames": "Lenka"
}
],
"edited_by": 576
},
"updated": "2023-09-19T14:36:28.934393+00:00"
}
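
After downloading the two files listed under "_files", their integrity can be checked against the MD5 checksums recorded above. The following is a minimal Python sketch, assuming both files sit in the current working directory; the file names and checksum values are copied verbatim from this record, and everything else is illustrative.

import hashlib

# Expected MD5 checksums, copied from the "_files" entries of this record.
EXPECTED = {
    "double_descent_uncertainty-main.zip": "eb0de263154e9b45d2248211414d9815",
    "README.txt": "b51340aa9b662d3826492211878a02f2",
}

def md5_of(path, chunk_size=1 << 20):
    # Stream the file through MD5 so large archives are not read into memory at once.
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in EXPECTED.items():
    actual = md5_of(name)
    print(name, "OK" if actual == expected else "MISMATCH: got " + actual)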
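
The description above refers to a random features model for binary classification and to a double-descent-like behavior of calibration as overparametrization grows. The sketch below illustrates that kind of experiment, not the repository's actual implementation: it sweeps the number of random features p at a fixed training-set size and reports test accuracy together with a crude binned calibration error. The teacher model, the tanh feature map, scikit-learn's LogisticRegression, and the fixed regularization strength C=1.0 are all assumptions chosen for brevity; the related paper works with sharp asymptotic formulas instead.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_train, n_test = 100, 200, 2000

# Hypothetical teacher: labels are a noisy sign of a linear function of Gaussian inputs.
w_star = rng.standard_normal(d)

def sample(n):
    X = rng.standard_normal((n, d)) / np.sqrt(d)
    y = (X @ w_star + 0.1 * rng.standard_normal(n) > 0).astype(int)
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

for p in [50, 100, 200, 400, 800]:   # number of random features
    F = rng.standard_normal((p, d))  # fixed random first-layer weights
    phi = lambda X: np.tanh(X @ F.T)  # random-features map (tanh is an arbitrary choice)
    clf = LogisticRegression(C=1.0, max_iter=5000).fit(phi(X_tr), y_tr)
    proba = clf.predict_proba(phi(X_te))[:, 1]
    acc = np.mean((proba > 0.5) == y_te)
    # Crude calibration error: binned gap between predicted P(y=1)
    # and the empirical frequency of y=1, weighted by bin mass.
    bins = np.clip((proba * 10).astype(int), 0, 9)
    ece = sum(
        np.abs(proba[bins == b].mean() - y_te[bins == b].mean()) * np.mean(bins == b)
        for b in range(10)
        if np.any(bins == b)
    )
    print(f"p/n = {p / n_train:4.1f}   accuracy = {acc:.3f}   calib. error = {ece:.3f}")

The comparison of interest is how the calibration column behaves relative to the accuracy column as p/n crosses 1, which is the regime of overparametrization the abstract discusses.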