We study two coarser approximations on top of a Kronecker factorization (K-FAC) of the Fisher information matrix, to scale up natural gradient to deep and wide Convolutional Neural Networks (CNNs). The first treats the activations (feature maps) as spatially uncorrelated, while the second retains only correlations among groups of channels. Both variants yield a further block-diagonal approximation tailored for CNNs, which is much more efficient to compute and invert. Experiments on the VGG11 and ResNet50 architectures show that the technique substantially reduces wall-clock training time compared with both K-FAC and a baseline with Batch Normalization, converging faster to similar or better generalization error.
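To make the spatially-uncorrelated variant concrete, below is a minimal NumPy sketch, not the paper's implementation. All shapes, names (`channel_cov`, `precondition_grad`), and the damping value are illustrative assumptions. Treating every spatial location as an i.i.d. sample shrinks the activation factor from a (C_in·k·k) × (C_in·k·k) matrix to a C_in × C_in channel second moment; the channel-group variant would further restrict these factors to block-diagonal form over channel groups.

```python
import numpy as np

def channel_cov(x, eps=1e-3):
    """Uncentered channel second moment, pooling batch and spatial dims.

    x: array of shape (N, C, H, W). Under the spatially-uncorrelated
    assumption, each spatial location counts as an independent sample,
    so the factor is a small C x C matrix rather than one over the
    full kernel support.
    """
    N, C, H, W = x.shape
    flat = x.transpose(1, 0, 2, 3).reshape(C, -1)  # (C, N*H*W)
    cov = flat @ flat.T / flat.shape[1]
    return cov + eps * np.eye(C)  # Tikhonov damping (illustrative value)

def precondition_grad(grad_w, a, g, eps=1e-3):
    """Approximate natural-gradient step for one conv layer.

    grad_w: weight gradient, shape (C_out, C_in, k, k)
    a:      layer input activations, shape (N, C_in, H, W)
    g:      gradients w.r.t. the layer output, shape (N, C_out, H, W)

    With the Fisher block approximated as Gamma (x) (Omega (x) I_{k*k}),
    the inverse applies channel-wise on each side of the gradient and
    leaves the k*k spatial kernel offsets untouched.
    """
    omega_inv = np.linalg.inv(channel_cov(a, eps))  # (C_in, C_in)
    gamma_inv = np.linalg.inv(channel_cov(g, eps))  # (C_out, C_out)
    # Contract over channel axes only; the kernel axes pass through.
    return np.einsum('po,ocij,cq->pqij', gamma_inv, grad_w, omega_inv)

# Toy usage with random tensors (shapes are arbitrary):
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16, 14, 14))
g = rng.standard_normal((8, 32, 14, 14))
grad_w = rng.standard_normal((32, 16, 3, 3))
step = precondition_grad(grad_w, a, g)  # same shape as grad_w
```

Because each factor is only C × C, the inversion cost no longer grows with the kernel size, which is the source of the efficiency gain the abstract describes.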