
In-Person Poster presentation / poster accept

Plateau in Monotonic Linear Interpolation --- A "Biased" View of Loss Landscape for Deep Networks

Xiang Wang · Annie Wang · Mo Zhou · Rong Ge

MH1-2-3-4 #158

Keywords: [ Theory ] [ deep learning theory ] [ loss landscape ] [ monotonic linear interpolation ]


Abstract:

Monotonic linear interpolation (MLI), the observation that the loss and accuracy are monotonic along the line connecting a random initialization to the minimizer that training converges to, is a phenomenon commonly observed when training neural networks. Such a phenomenon may seem to suggest that optimizing neural networks is easy. In this paper, we show that the MLI property is not necessarily related to the hardness of the optimization problem, and that empirical observations of MLI in deep neural networks depend heavily on the bias terms. In particular, we show that linearly interpolating the weights and the biases has very different effects on the final output, and that when different classes have different last-layer biases in a deep network, there is a long plateau in both the loss and the accuracy along the interpolation path (a behavior that existing MLI theory cannot explain). Using a simple model, we also show how the last-layer biases can differ across classes even on a perfectly balanced dataset. Empirically, we demonstrate that similar intuitions hold for practical networks and realistic datasets.
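The abstract describes the MLI measurement itself only in passing. Below is a minimal sketch, not the authors' code, of how an MLI curve is typically computed: train a small network, then evaluate the loss at points theta(alpha) = (1 - alpha) * theta_init + alpha * theta_final for alpha in [0, 1]. The architecture, dataset, and training schedule here are illustrative assumptions; note that interpolating the full state dict mixes weights and biases with the same coefficient, which is exactly the coupling the paper examines.

```python
# Minimal MLI sketch (illustrative only; not the paper's experimental setup).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic binary classification data (assumption: any dataset would do).
X = torch.randn(512, 10)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

theta_init = copy.deepcopy(model.state_dict())  # random initialization

# Train to an approximate minimizer.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(500):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()
theta_final = copy.deepcopy(model.state_dict())

# Evaluate the loss along the straight line from init to minimizer.
# MLI holds if the printed losses decrease monotonically in alpha.
eval_model = copy.deepcopy(model)
for alpha in torch.linspace(0.0, 1.0, 11):
    interpolated = {
        k: (1 - alpha) * theta_init[k] + alpha * theta_final[k]
        for k in theta_init
    }
    eval_model.load_state_dict(interpolated)
    with torch.no_grad():
        loss = loss_fn(eval_model(X), y).item()
    print(f"alpha={alpha:.1f}  loss={loss:.4f}")
```

One could probe the paper's main observation with the same harness by interpolating only the bias parameters (keys ending in "bias") while holding the weights fixed at their final values, and comparing the resulting curve to the joint interpolation above.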
