Crystal-graph attention networks for the prediction of stable materials


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Schmidt, Jonathan</dc:creator>
  <dc:creator>Pettersson, Love</dc:creator>
  <dc:creator>Verdozzi, Claudio</dc:creator>
  <dc:creator>Botti, Silvana</dc:creator>
  <dc:creator>Marques, Miguel</dc:creator>
  <dc:date>2021-12-16</dc:date>
  <dc:description>Graph neural networks have enjoyed great success in the prediction of material properties for both molecules and crystals. These networks typically use the atomic positions (usually expanded in a Gaussian basis) and the atomic species as input. Unfortunately, this information is in general not available when predicting new materials, for which the precise geometry is unknown. In this work, we circumvent this problem by predicting the thermodynamic stability of crystal structures without knowledge of the precise bond distances. We replace this information with embeddings of graph distances, allowing our networks to be used directly in high-throughput studies based on both composition and crystal structure prototype. Using these embeddings, we combine the latest developments in graph neural networks and apply them to the prediction of distances to the convex hull. To train these networks, we curate a dataset of over 2 million density-functional calculations of crystals, drawn from various sources but computed with consistent calculation parameters. The new dataset allows for the construction of a high-quality convex hull and a large-scale transfer-learning approach. We apply the resulting model to a high-throughput search of 15 million tetragonal perovskites of composition ABCD2. As a result, we identify several thousand potentially stable compounds and demonstrate that transfer learning from the newly curated dataset reduces the required training data by 50%.</dc:description>
  <dc:identifier>https://archive.materialscloud.org/record/2021.222</dc:identifier>
  <dc:identifier>doi:10.24435/materialscloud:j9-bf</dc:identifier>
  <dc:identifier>mcid:2021.222</dc:identifier>
  <dc:identifier>oai:materialscloud.org:1182</dc:identifier>
  <dc:language>en</dc:language>
  <dc:publisher>Materials Cloud</dc:publisher>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>Creative Commons Attribution 4.0 International https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:subject>density-functional theory</dc:subject>
  <dc:subject>high-throughput</dc:subject>
  <dc:subject>PBE</dc:subject>
  <dc:title>Crystal-graph attention networks for the prediction of stable materials</dc:title>
  <dc:type>Dataset</dc:type>
</oai_dc:dc>