Poster

Feed-forward Propagation in Probabilistic Neural Networks with Categorical and Max Layers

Alexander (Oleksandr) Shekhovtsov · Boris Flach

Great Hall BC #60

Keywords: [ softmax ] [ uncertainty ] [ bayesian ] [ probabilistic neural network ] [ dropout ] [ argmax ] [ logsumexp ]


Abstract:

Probabilistic Neural Networks deal with various sources of stochasticity: input noise, dropout, stochastic neurons, parameter uncertainties modeled as random variables, etc. In this paper we revisit a feed-forward propagation approach that allows one to estimate, for each neuron, its mean and variance with respect to all of these sources of stochasticity. In contrast, standard NNs propagate only point estimates, discarding the uncertainty. Methods that also propagate the variance have been proposed by several authors in different contexts. The view presented here attempts to clarify the assumptions and derivation behind such methods, relate them to classical NNs, and broaden their scope of applicability. The main technical contributions are new approximations for the distributions of argmax and max-related transforms, which allow for fully analytic uncertainty propagation in networks with softmax and max-pooling layers as well as leaky ReLU activations. We evaluate the accuracy of the approximation and suggest a simple calibration. Applying the method to networks with dropout allows for faster training and gives improved test likelihoods without the need for sampling.
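To illustrate the kind of propagation the abstract describes, here is a minimal sketch (not the authors' exact scheme) that pushes means and variances through a linear layer and a leaky-ReLU activation under the common assumptions of such methods: coordinates are treated as independent and each pre-activation is approximated as Gaussian. The layer sizes, the `alpha` value, and the helper names are illustrative only.

```python
# Minimal moment-propagation sketch under a factorized-Gaussian assumption.
import numpy as np
from scipy.stats import norm

def linear_moments(mean, var, W, b):
    """Mean/variance of W x + b when the coordinates of x are independent."""
    return W @ mean + b, (W ** 2) @ var

def leaky_relu_moments(mean, var, alpha=0.01):
    """First two moments of leaky-ReLU(x) for x ~ N(mean, var), per coordinate."""
    std = np.sqrt(var) + 1e-12
    z = mean / std
    pdf, cdf = norm.pdf(z), norm.cdf(z)
    # truncated-Gaussian moments of the positive and negative parts of x
    m_pos = mean * cdf + std * pdf
    s_pos = (mean ** 2 + var) * cdf + mean * std * pdf
    m_neg = mean - m_pos
    s_neg = (mean ** 2 + var) - s_pos
    out_mean = m_pos + alpha * m_neg
    out_var = (s_pos + alpha ** 2 * s_neg) - out_mean ** 2
    return out_mean, np.maximum(out_var, 0.0)

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 3)) / np.sqrt(3), np.zeros(4)
x_mean, x_var = rng.normal(size=3), 0.1 * np.ones(3)  # e.g. moments from input noise or dropout
a_mean, a_var = linear_moments(x_mean, x_var, W, b)
h_mean, h_var = leaky_relu_moments(a_mean, a_var)
print(h_mean, h_var)
```

The paper's contributions concern the harder cases (softmax, max-pooling, argmax), for which it derives analytic approximations so that the same two-moment propagation can run end to end without sampling.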
