TropEx: An Algorithm for Extracting Linear Terms in Deep Neural Networks

Martin Trimmel · Henning Petzka · Cristian Sminchisescu

Keywords: deep neural networks · piecewise linear function · ReLU network · rectified linear unit · linear terms · tropical function · linear regions · deep learning theory


Deep neural networks with rectified linear unit (ReLU) activations are piecewise linear functions, whose hyperplanes partition the input space into an astronomically large number of linear regions. Previous work focused on counting linear regions to measure the network's expressive power and on analyzing geometric properties of the hyperplane configurations. In contrast, we aim to understand the impact of the linear terms on network performance by examining the information encoded in their coefficients. To this end, we derive TropEx, a nontrivial tropical algebra-inspired algorithm to systematically extract linear terms based on data. Applied to convolutional and fully-connected networks, our algorithm uncovers significant differences in how these network types utilize linear regions for generalization. This underlines the importance of systematic linear term exploration, to better understand generalization in neural networks trained with complex data sets.
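The piecewise-linear property underlying the abstract can be made concrete: on the linear region containing an input x, a ReLU network coincides with an affine map whose coefficients follow from freezing the activation pattern at x. The sketch below illustrates this property for a toy two-layer network; it is not the TropEx algorithm itself, and all weights and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network (weights are illustrative, not from the paper).
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)
W2, b2 = rng.standard_normal((2, 5)), rng.standard_normal(2)

def net(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_linear_term(x):
    """Affine map agreeing with net() on the linear region containing x."""
    mask = (W1 @ x + b1 > 0).astype(float)   # activation pattern at x
    A = W2 @ (W1 * mask[:, None])            # effective weight matrix
    c = W2 @ (b1 * mask) + b2                # effective bias
    return A, c

x = rng.standard_normal(3)
A, c = local_linear_term(x)
assert np.allclose(net(x), A @ x + c)        # exact agreement at x
```

Each distinct activation pattern yields one such linear term, which is why the number of terms grows so quickly with depth and width; TropEx extracts these terms systematically, guided by data.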
