Restricting the Flow: Information Bottlenecks for Attribution

Karl Schulz, Leon Sixt, Federico Tombari, Tim Landgraf

Keywords: information bottleneck, interpretability, information theory

Tuesday: Feature Discovery for Structured Data

Abstract: Attribution methods provide insights into the decision-making of machine learning models such as artificial neural networks. For a given input sample, they assign a relevance score to each individual input variable, such as the pixels of an image. In this work, we adopt the information bottleneck concept for attribution. By adding noise to intermediate feature maps, we restrict the flow of information and can quantify (in bits) how much information image regions provide. We compare our method against ten baselines using three different metrics on VGG-16 and ResNet-50, and find that our method outperforms all baselines in five out of six settings. The method’s information-theoretic foundation provides an absolute frame of reference for attribution values (bits) and a guarantee that regions scored close to zero are not necessary for the network's decision.
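The core mechanism the abstract describes can be illustrated with a minimal sketch: an intermediate feature map is blended with Gaussian noise, z = λ·f + (1−λ)·ε with ε ~ N(μ, σ²), and the information passing through each element is the KL divergence between the resulting distribution and pure noise, converted to bits. The function below is a simplified illustration of this idea (not the authors' implementation); the variable names and the toy feature map are assumptions for the example.

```python
import numpy as np

def bottleneck_bits(feat, lam, mu, sigma):
    """Information (in bits) per feature-map element that survives the
    noisy bottleneck z = lam*feat + (1-lam)*eps, eps ~ N(mu, sigma^2).

    Simplified sketch of the information-bottleneck attribution idea,
    not the authors' code. lam must lie in [0, 1): lam=0 replaces the
    feature entirely with noise (0 bits pass), lam close to 1 lets
    almost everything through.
    """
    m = lam * feat + (1.0 - lam) * mu   # mean of the noised activation z
    s = (1.0 - lam) * sigma             # std of z
    # KL( N(m, s^2) || N(mu, sigma^2) ) in nats, then converted to bits
    kl = np.log(sigma / s) + (s**2 + (m - mu)**2) / (2.0 * sigma**2) - 0.5
    return kl / np.log(2.0)

# Toy feature map with matching noise statistics (assumed for illustration)
rng = np.random.default_rng(0)
feat = rng.normal(1.0, 2.0, size=(8, 8))
mu, sigma = feat.mean(), feat.std()

zero_bits = bottleneck_bits(feat, lam=0.0, mu=mu, sigma=sigma)
half_bits = bottleneck_bits(feat, lam=0.5, mu=mu, sigma=sigma)
print(zero_bits.max())   # pure noise: no information passes
print(half_bits.mean())  # partial pass-through: positive bit count
```

At λ=0 the KL term vanishes exactly, which is the source of the guarantee mentioned in the abstract: a region whose bit count is (close to) zero contributes (almost) nothing to the downstream decision. In the paper's full setting, λ is a learned per-element mask optimized to keep the prediction while minimizing the total bits.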
