

Poster

InstaSHAP: Interpretable Additive Models Explain Shapley Values Instantly

James Enouen · Yan Liu

Hall 3 + Hall 2B #503
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

In recent years, the Shapley value and SHAP explanations have emerged as one of the most dominant paradigms for providing post-hoc explanations of blackbox models. Despite their well-founded theoretical properties, many recent works have focused on the limitations in both their computational efficiency and their representation power. The underlying connection with additive models, however, is left critically under-emphasized in the current literature. In this work, we find that a variational perspective linking GAM models and SHAP explanations is able to provide deep insights into nearly all recent developments. In light of this connection, we borrow in the other direction to develop a new method to train interpretable GAM models which are automatically purified to compute the Shapley value in a single forward pass. Finally, we provide theoretical results showing the limited representation power of GAM models is the same Achilles' heel existing in SHAP and discuss the implications for SHAP's modern usage in CV and NLP.
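The connection between additive models and single-forward-pass Shapley values can be illustrated with a small sketch (not the paper's method; the shape functions and background data below are hypothetical): for a purely additive model f(x) = Σᵢ fᵢ(xᵢ), the interventional Shapley value of feature i reduces to the centered component fᵢ(xᵢ) − E[fᵢ(Xᵢ)], which a brute-force subset enumeration confirms.

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

# A toy additive (GAM-style) model: f(x) = sum_i g_i(x_i).
# The shape functions here are arbitrary illustrative choices.
components = [np.sin, np.square, np.tanh]

background = rng.normal(size=(1000, 3))   # reference distribution
x = np.array([0.5, -1.0, 2.0])            # point to explain

# Interventional value function: features outside S are marginalized
# over the background data; additivity makes this a per-feature mean.
def v(S):
    total = 0.0
    for i, g in enumerate(components):
        total += g(x[i]) if i in S else g(background[:, i]).mean()
    return total

n = len(components)

def shapley(i):
    """Exact Shapley value of feature i by enumerating all subsets."""
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(n):
        for S in itertools.combinations(others, k):
            w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
            phi += w * (v(set(S) | {i}) - v(set(S)))
    return phi

brute = np.array([shapley(i) for i in range(n)])

# Centered additive components: computable in one forward pass,
# no subset enumeration needed.
direct = np.array([g(x[i]) - g(background[:, i]).mean()
                   for i, g in enumerate(components)])

print(np.allclose(brute, direct))  # → True
```

For a general (non-additive) model the marginal contribution v(S ∪ {i}) − v(S) varies with S, so this shortcut fails; that gap is exactly the representation-power limitation of GAM models discussed in the abstract.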
