Poster

Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks

José Antonio Oramas Mogrovejo · Kaili Wang · Tinne Tuytelaars

Great Hall BC #41

Keywords: [ explainable ai ] [ model interpretation ] [ model explanation ] [ evaluation ]


Abstract:

Visual interpretation and explanation of deep models are critical for the wide adoption of systems that rely on them. In this paper, we propose a novel scheme for both interpretation and explanation in which, given a pretrained model, we automatically identify the internal features relevant to the set of classes the model considers, without relying on additional annotations. We interpret the model through average visualizations of this reduced set of features. Then, at test time, we explain the network's prediction by accompanying the predicted class label with supporting visualizations derived from the identified features. In addition, we propose a method to address the artifacts introduced by strided operations in deconvNet-based visualizations. Moreover, we introduce an8Flower, a dataset specifically designed for objective quantitative evaluation of visual explanation methods. Experiments on the MNIST, ILSVRC'12, Fashion144k, and an8Flower datasets show that our method produces detailed explanations with good coverage of the relevant features of the classes of interest.
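The abstract describes the pipeline only at a high level; the sketch below illustrates one plausible realization of it, not the authors' actual method. The probed layer (the last conv layer of a VGG-16), the relevance score (class-conditional mean activation, a simple stand-in for the paper's feature-identification procedure), and the number of features kept per class (`top_k`) are all assumptions for illustration.

```python
import torch
import torchvision.models as models

# Minimal sketch of the interpretation/explanation pipeline from the
# abstract. The probed layer, the relevance score, and top_k are
# illustrative assumptions, not the authors' actual procedure.

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
layer = model.features[28]  # last conv layer of VGG-16 (assumption)

acts = {}
layer.register_forward_hook(lambda m, i, o: acts.update(feat=o.detach()))

@torch.no_grad()
def class_feature_relevance(loader, num_classes, num_features):
    """Score each internal feature map by its class-conditional mean
    activation over a labeled set (no additional annotations needed)."""
    score = torch.zeros(num_classes, num_features)
    count = torch.zeros(num_classes)
    for x, y in loader:
        model(x)
        f = acts["feat"].mean(dim=(2, 3))  # (B, C, H, W) -> (B, C)
        for c in y.unique():
            mask = y == c
            score[c] += f[mask].sum(dim=0)
            count[c] += mask.sum()
    return score / count.clamp(min=1).unsqueeze(1)

@torch.no_grad()
def explain(x, relevance, top_k=8):
    """Return the predicted label plus the indices of the identified
    features that would back its supporting visualizations."""
    pred = model(x).argmax(dim=1).item()
    feat_ids = relevance[pred].topk(top_k).indices.tolist()
    return pred, feat_ids
```

Given a labeled loader, `relevance = class_feature_relevance(loader, num_classes=1000, num_features=512)` identifies per-class features once, offline; at test time, `explain` pairs the predicted label with the indices of that class's identified feature maps, which in the paper would drive the accompanying deconvNet-based visualizations.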
