Workshop

Local Explanation Methods for Deep Neural Networks Lack Sensitivity to Parameter Values

Julius Adebayo · Justin Gilmer · Ian Goodfellow · Been Kim

East Meeting Level 8 + 15 #9

Mon 30 Apr, 11 a.m. PDT

Explaining the output of a complicated machine learning model like a deep neural network (DNN) is a central challenge in machine learning. Several proposed local explanation methods address this issue by identifying which dimensions of a single input are most responsible for a DNN's output. The goal of this work is to assess the sensitivity of local explanations to DNN parameter values. Somewhat surprisingly, we find that DNNs with randomly initialized weights produce explanations that are both visually and quantitatively similar to those produced by DNNs with learned weights. Our conjecture is that this phenomenon occurs because these explanations are dominated by the lower-level features of a DNN, and that a DNN's architecture provides a strong prior which significantly affects the representations learned at these lower layers.
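The comparison the abstract describes can be sketched in a few lines: compute a local explanation for a model with learned weights and for the same architecture with randomly re-initialized weights, then measure how similar the two explanations are. The sketch below is a minimal illustration, not the authors' implementation; the tiny CNN, the input-gradient saliency map, and the Spearman rank correlation are assumptions chosen for concreteness (the paper evaluates several local explanation methods and similarity measures).

```python
# Minimal sketch of the trained-vs-random-weights comparison described above.
# The architecture, data, and similarity metric are illustrative assumptions,
# not the paper's exact setup.
import copy

import torch
import torch.nn as nn
from scipy.stats import spearmanr


def gradient_saliency(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Return |d(logit_target)/dx|, a simple local explanation of the output."""
    x = x.clone().requires_grad_(True)
    logit = model(x)[0, target]
    logit.backward()
    return x.grad.abs().squeeze(0)


def reinitialize(model: nn.Module) -> nn.Module:
    """Copy the model and re-draw all parameters from their default initializers."""
    rand_model = copy.deepcopy(model)
    for module in rand_model.modules():
        if hasattr(module, "reset_parameters"):
            module.reset_parameters()
    return rand_model


# Hypothetical small CNN standing in for the paper's DNNs; in practice
# `trained` would be fit on data, so its weights here are a placeholder
# for learned parameters.
trained = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
random_net = reinitialize(trained)

x = torch.randn(1, 3, 32, 32)  # stand-in input image
sal_trained = gradient_saliency(trained, x, target=0)
sal_random = gradient_saliency(random_net, x, target=0)

# Quantitative similarity: rank correlation of the flattened saliency maps.
rho, _ = spearmanr(sal_trained.flatten().numpy(), sal_random.flatten().numpy())
print(f"Spearman rank correlation between explanations: {rho:.3f}")
```

Under the paper's finding, a comparison like this yields explanations that remain visually and quantitatively similar even though one network's weights are random.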
