Long Oral
Workshop: Trustworthy Machine Learning for Healthcare

Explaining Multiclass Classifiers with Categorical Values: A Case Study in Radiography

Luca Franceschi · Cemre Zor · Muhammad Bilal Zafar · Gianluca Detommaso · Cedric Archambeau · Tamas Madl · Michele Donini · Matthias Seeger


Explainability of machine learning methods is of fundamental importance in healthcare to calibrate trust. A large branch of explainable machine learning uses tools linked to the Shapley value, which have nonetheless been found difficult to interpret and potentially misleading. Taking multiclass classification as a reference task, we argue that a critical issue in these methods is that they disregard the structure of the model outputs. We develop the Categorical Shapley value as a theoretically grounded method to explain the output of multiclass classifiers in terms of transition (or flipping) probabilities across classes. We demonstrate the method on a case study comprising three example scenarios for pneumonia detection and subtyping using X-ray images.
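To illustrate the general idea of a Shapley value over a categorical outcome (this is a minimal sketch, not the authors' exact estimator), one can take the payoff of a feature coalition to be the indicator that the predicted class has flipped from a background prediction to the foreground prediction; averaging marginal contributions over all feature orderings then yields, per feature, a contribution to the probability of that class transition. The `model`, `x_fg` (foreground input), and `x_bg` (background baseline) names below are hypothetical:

```python
import itertools
import numpy as np

def _predict_on_coalition(model, x_fg, x_bg, coalition):
    # Compose an input that takes the coalition's features from the
    # foreground example and all remaining features from the background.
    x = x_bg.copy()
    idx = list(coalition)
    x[idx] = x_fg[idx]
    return model(x)

def categorical_shapley(model, x_fg, x_bg):
    """Exact Shapley values for the binary payoff
    v(S) = 1[prediction on the composed input equals model(x_fg)],
    computed by brute force over all feature permutations
    (feasible only for small feature counts)."""
    n = len(x_fg)
    target = model(x_fg)
    phi = np.zeros(n)
    perms = list(itertools.permutations(range(n)))
    for perm in perms:
        coalition = []
        prev = float(_predict_on_coalition(model, x_fg, x_bg, coalition) == target)
        for i in perm:
            coalition.append(i)
            cur = float(_predict_on_coalition(model, x_fg, x_bg, coalition) == target)
            phi[i] += cur - prev  # marginal contribution of feature i
            prev = cur
    return phi / len(perms)

# Toy demo: a two-feature, two-class "model".
model = lambda x: int(x[0] + x[1] >= 1.0)
phi = categorical_shapley(model, np.array([1.0, 1.0]), np.array([0.0, 0.0]))
# By the efficiency property, phi sums to 1 when the background and
# foreground predictions differ; here phi ≈ [0.5, 0.5] by symmetry.
```

Because the payoff is an indicator of a class transition, each attribution is directly readable as a (signed) contribution to a flipping probability, rather than an attribution on an unbounded score.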
