

Poster

Gaussian Process Behaviour in Wide Deep Neural Networks

Alexander Matthews · Jiri Hron · Mark Rowland · Richard E Turner · Zoubin Ghahramani

East Meeting level; 1,2,3 #37

Abstract:

Whilst deep neural networks have shown great empirical success, there is still much work to be done to understand their theoretical properties. In this paper, we study the relationship between Gaussian processes with a recursive kernel definition and random wide fully connected feedforward networks with more than one hidden layer. We exhibit limiting procedures under which finite deep networks converge in distribution to the corresponding Gaussian process. To evaluate convergence rates empirically, we use maximum mean discrepancy. We then exhibit situations where existing Bayesian deep networks are close to Gaussian processes in terms of the key quantities of interest. Any Gaussian process has a flat representation. Since this behaviour may be undesirable in certain situations, we discuss ways in which it might be prevented.
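The limiting behaviour described above can be illustrated with a small Monte Carlo sketch (not code from the paper; all names are illustrative). It draws many random wide fully connected ReLU networks under the σ_w²/fan-in weight scaling that the Gaussian-process limit assumes, evaluates each at a fixed input, and checks two consequences of the limit: the output variance matches the value predicted by the recursive kernel (which is 2 for ReLU with σ_w² = 2, σ_b = 0, and an input with ‖x‖²/d = 1), and the excess kurtosis of the output distribution is near zero, as it must be for a Gaussian.

```python
import numpy as np

def random_relu_net(x, width, depth, sigma_w=1.0, sigma_b=0.0, rng=None):
    """One draw of a random fully connected ReLU network mapping R^d -> R.

    Weights ~ N(0, sigma_w^2 / fan_in) and biases ~ N(0, sigma_b^2), the
    parameterisation under which the wide limit is a Gaussian process.
    (Illustrative sketch; not the authors' code.)
    """
    rng = np.random.default_rng() if rng is None else rng
    h = x
    for _ in range(depth):  # `depth` hidden layers
        fan_in = h.shape[-1]
        W = rng.normal(0.0, sigma_w / np.sqrt(fan_in), size=(fan_in, width))
        b = rng.normal(0.0, sigma_b, size=width)
        h = np.maximum(h @ W + b, 0.0)
    fan_in = h.shape[-1]
    W = rng.normal(0.0, sigma_w / np.sqrt(fan_in), size=(fan_in, 1))
    return (h @ W)[..., 0]  # scalar linear readout

rng = np.random.default_rng(0)
x = np.ones(3)  # so that ||x||^2 / d = 1
samples = np.array([
    random_relu_net(x, width=256, depth=2, sigma_w=np.sqrt(2.0), rng=rng)
    for _ in range(2000)
])

# With sigma_w^2 = 2 the recursive ReLU kernel preserves variance layer to
# layer, so the limiting output variance at this input is 2.
out_var = samples.var()
excess_kurtosis = ((samples - samples.mean()) ** 4).mean() / out_var ** 2 - 3.0
print(out_var, excess_kurtosis)
```

At finite width the deviations from these Gaussian statistics are O(1/width), which is the kind of gap the paper measures more sharply with maximum mean discrepancy.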
