Keynote Presentation in Workshop: AI for Earth and Space Science

Explainable, Interpretable, and Trustworthy AI for the Earth Sciences

Amy McGovern


Abstract:

The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) is developing trustworthy AI for a wide variety of weather, climate, and coastal phenomena. In this talk, I will briefly introduce AI2ES and then focus on our recent work on explainable and interpretable AI (XAI) for the earth sciences. Specifically, I will discuss work by Mamalakis et al. (2021; 2022) developing benchmark datasets to objectively assess XAI methods for geoscientific applications; work by Flora et al. (in prep) developing a standard XAI toolkit for the earth sciences and assessing the validity of XAI methods; applications of XAI to a 3D CNN coastal fog prediction model by Krell, Kamangir et al. (submitted); and quantification of the sources of XAI uncertainty in deep ensembles by Gagne et al. (in prep). Because AI2ES is focused on developing AI that environmental forecasters and managers deem trustworthy, I will also describe our preliminary findings on weather forecasters' perceptions of AI trustworthiness, explainability, and interpretability (Cains et al., 2022), set in the comparative context of reviews of theoretical and empirical research on explainability, trust, and trustworthiness (Smith et al., in prep; Wirz et al., in prep).
