Approximation and Learning with Deep Convolutional Models: a Kernel Perspective

Alberto Bietti

Keywords: [ approximation ] [ generalization ] [ convolution ] [ deep learning theory ] [ kernel methods ]

[ Abstract ]
Thu 28 Apr 10:30 a.m. PDT — 12:30 p.m. PDT


The empirical success of deep convolutional networks on tasks involving high-dimensional data such as images or audio suggests that they can efficiently approximate certain functions that are well-suited for such tasks. In this paper, we study this question through the lens of kernel methods, by considering simple hierarchical kernels with two or three convolution and pooling layers, inspired by convolutional kernel networks. These kernels achieve good empirical performance on standard vision datasets, while admitting a precise description of their functional space that yields new insights into their inductive bias. We show that the RKHS consists of additive models of interaction terms between patches, and that its norm encourages spatial similarities between these terms through pooling layers. We then provide generalization bounds which illustrate how pooling and patches yield improved sample-complexity guarantees when the target function exhibits such regularities.
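The structure described above (patch extraction, a pointwise dot-product kernel on patches, then spatial pooling) can be sketched for 1D signals. This is a minimal illustration, not the paper's exact construction: the patch size, pooling width, and the particular homogeneous dot-product kernel used here are assumptions for the sake of the example.

```python
import numpy as np

def extract_patches(x, size):
    # x: array of shape (length, channels); returns one row per patch,
    # each patch flattened to a vector of length size * channels
    return np.stack([x[i:i + size].ravel() for i in range(len(x) - size + 1)])

def dot_product_kernel(U, V):
    # A homogeneous dot-product kernel on patch vectors:
    #   k(u, v) = ||u|| ||v|| exp(<u/||u||, v/||v||> - 1)
    # (one common choice in convolutional-kernel constructions; an assumption here)
    nu = np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    nv = np.linalg.norm(V, axis=1, keepdims=True) + 1e-12
    cos = (U / nu) @ (V / nv).T
    return (nu @ nv.T) * np.exp(cos - 1.0)

def conv_kernel(x, y, patch_size=3, pool_width=2):
    # One convolution layer followed by average pooling and a final
    # global pooling, applied to the cross-kernel matrix of patches.
    Px, Py = extract_patches(x, patch_size), extract_patches(y, patch_size)
    K = dot_product_kernel(Px, Py)
    # box-filter average pooling acts on both the x- and y-patch axes
    w = np.ones(pool_width) / pool_width
    K = np.apply_along_axis(lambda r: np.convolve(r, w, mode="valid"), 1, K)
    K = np.apply_along_axis(lambda c: np.convolve(c, w, mode="valid"), 0, K)
    return K.mean()  # global pooling at the last layer
```

Because the patch kernel is symmetric and pooling is applied identically to both axes, the resulting kernel is symmetric; stacking another patch-extraction/kernel/pooling stage on the feature maps would give the two- and three-layer hierarchies the paper studies.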
