

Poster in Workshop: AI4MAT-ICLR-2025: AI for Accelerated Materials Design

Evaluating Machine Learning Potentials on Bulk Structures with Neutral Substitutional Defects

Xiaoxiao Wang · Suehyun Park · Kin Long Kelvin Lee · Rachel Kurchin · Santiago Miret

Keywords: [ Finetuning ] [ Defects ] [ Perovskites ] [ MLIP ]


Abstract:

Substitutional defects, whether intentionally introduced as dopants or unintentionally as contaminants, are often primary determinants of the performance, efficiency, and versatility of semiconductors. However, the high computational cost of density functional theory (DFT) calculations limits the feasibility of large-scale screening. Machine learning interatomic potentials (MLIPs) offer a promising alternative, as they can achieve high accuracy when trained on computational datasets while being significantly faster than DFT calculations. In this work, we assess the generalization of the MACE-MP potential on a newly developed dataset (Perovs-Dopants) of perovskites with neutral substitutional defects. To disentangle the impact of computational settings from compositional novelty, we performed single-point DFT calculations on a subset of the MPtrj dataset using CP2K and analyzed the distribution of force discrepancies between the different DFT codes. Our results indicate that both differences in DFT settings and out-of-distribution chemical compositions contribute to the prediction error when using MACE-MP. We then systematically compared standard finetuning and multihead finetuning approaches, demonstrating that multihead finetuning better preserves knowledge from the original training dataset while adapting to the new defect dataset.
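The evaluation described above can be illustrated with a minimal sketch using the publicly available MACE-MP foundation model through its ASE calculator interface (mace-torch package). The perovskite structure, the choice of dopant, and the placeholder reference forces below are illustrative assumptions for exposition, not the authors' Perovs-Dopants dataset or actual pipeline.

```python
# Minimal sketch: predict forces for a perovskite with a neutral
# substitutional defect using the pretrained MACE-MP model, then
# compare against reference DFT forces. The structure, dopant, and
# reference array are illustrative assumptions only.
import numpy as np
from ase.spacegroup import crystal
from mace.calculators import mace_mp

# Build an ideal cubic SrTiO3 perovskite cell (space group Pm-3m).
srtio3 = crystal(
    ("Sr", "Ti", "O"),
    basis=[(0.0, 0.0, 0.0), (0.5, 0.5, 0.5), (0.5, 0.5, 0.0)],
    spacegroup=221,
    cellpar=[3.905, 3.905, 3.905, 90, 90, 90],
)

# Introduce a neutral substitutional defect: replace one Ti with Zr
# in a 2x2x2 supercell (hypothetical dopant chosen for illustration).
supercell = srtio3.repeat((2, 2, 2))
ti_indices = [i for i, atom in enumerate(supercell) if atom.symbol == "Ti"]
supercell[ti_indices[0]].symbol = "Zr"

# Attach the pretrained MACE-MP calculator and predict energy and forces.
calc = mace_mp(model="medium", device="cpu", default_dtype="float64")
supercell.calc = calc
energy = supercell.get_potential_energy()   # eV
forces = supercell.get_forces()             # eV/Angstrom

# Compare against single-point DFT reference forces (placeholder array
# standing in for a CP2K calculation on the same structure).
dft_forces = np.zeros_like(forces)  # replace with real reference data
force_mae = np.abs(forces - dft_forces).mean()
print(f"MACE-MP energy: {energy:.3f} eV, force MAE vs. reference: {force_mae:.3f} eV/A")
```

The same loop over many defect structures, with real CP2K reference data in place of the placeholder array, yields the force-error distributions discussed in the abstract.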
