

Poster in Workshop: Tackling Climate Change with Machine Learning: Fostering the Maturity of ML Applications for Climate Change

Interpretable Machine Learning for power systems: Establishing Confidence in SHapley Additive exPlanationS

Tabia Ahmad · Robert Hamilton · Panagiotis N. Papadopoulos · Samuel Chevalier · Ilgiz Murzakhanov · Rahul Nellikkath · Jochen Stiasny · Spyros Chatzivasileiadis


Abstract:

Interpretable Machine Learning (IML) is expected to remove significant barriers to the application of Machine Learning (ML) algorithms in power systems. This work first seeks to showcase the benefits of SHapley Additive exPlanations (SHAP) for understanding the outcomes of ML models, which are increasingly being used to optimise power systems with a growing share of Renewable Energy (RE), in support of worldwide calls for decarbonisation and climate change mitigation. To do so, we demonstrate that the Power Transfer Distribution Factors (PTDF), a physics-based linear sensitivity index for power systems, can be derived from SHAP values. Specifically, we take the derivatives of the SHAP values of an ML model trained to learn line flows from generator power injections, using a DC power-flow case on a benchmark test network. By demonstrating that SHAP values can be related back to the physics that underpins the power system, we build confidence in the explanations SHAP can offer.
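As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below generates DC power-flow data from a made-up PTDF matrix, trains a linear regression to learn one line flow from generator injections, and checks that the slope of each feature's SHAP values with respect to its injection recovers the corresponding PTDF entry. The PTDF values, network size, and use of scikit-learn and the shap package are all illustrative assumptions.

```python
# Minimal sketch: for a DC power flow, line flows are linear in injections,
# flow = PTDF @ injections, so the derivative of a feature's SHAP value with
# respect to that injection should recover the PTDF entry. All numbers below
# are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression
import shap  # SHapley Additive exPlanations library

rng = np.random.default_rng(0)

# Hypothetical PTDF: sensitivity of 2 line flows to 3 generator injections.
PTDF = np.array([[0.4, -0.3, 0.1],
                 [0.2,  0.5, -0.25]])

# Sample generator injections (p.u.) and compute the resulting DC line flows.
X = rng.uniform(-1.0, 1.0, size=(500, 3))
y = X @ PTDF.T

# Train a model to learn the flow on line 0 from the injections, then explain it.
model = LinearRegression().fit(X, y[:, 0])
explainer = shap.LinearExplainer(model, X)
phi = explainer.shap_values(X)            # SHAP values, shape (500, 3)

# For a linear model, phi_i = w_i * (x_i - E[x_i]); the slope of phi_i vs x_i
# is w_i, which should match the PTDF row of the explained line.
for i in range(3):
    slope = np.polyfit(X[:, i], phi[:, i], 1)[0]
    print(f"gen {i}: d(SHAP)/d(injection) = {slope:+.3f}, PTDF = {PTDF[0, i]:+.3f}")
```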
