

Poster

Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation

Satoki Ishikawa · Rio Yokota · Ryo Karakida

Hall 3 + Hall 2B #135
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: Local learning, which trains a network through layer-wise local targets and losses, has been studied as an alternative to backpropagation (BP) in neural computation. However, its algorithms often become more complex or require additional hyperparameters because of their locality, making it challenging to identify desirable settings in which the algorithm progresses stably. To provide theoretical and quantitative insights, we introduce maximal update parameterization (μP) in the infinite-width limit for two representative designs of local targets: predictive coding (PC) and target propagation (TP). We verify that μP enables hyperparameter transfer across models of different widths. Furthermore, our analysis reveals unique and intriguing properties of μP that are not present in conventional BP. By analyzing deep linear networks, we find that PC's gradients interpolate between first-order and Gauss-Newton-like gradients, depending on the parameterization. We demonstrate that, in specific standard settings, PC in the infinite-width limit behaves more like the first-order gradient. For TP, even though the standard scaling of its last layer differs from that of classical μP, its local loss optimization favors the feature-learning regime over the kernel regime.
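For background intuition on the μP scalings that make hyperparameters transferable across widths, the sketch below applies the classical μP recipe for SGD with backprop (input/hidden initialization variance 1/fan_in, output variance 1/fan_in², and width-scaled per-layer learning rates) to a deep linear network, the setting analyzed in the paper. This is an illustrative toy under those assumptions, not the PC- or TP-specific parameterizations derived in the work; the function names, constants, and teacher setup are made up for the demo.

```python
# Minimal sketch (assumption: classical muP scalings for SGD with backprop on a
# 3-layer deep *linear* network). Background intuition for width-robust
# hyperparameters, NOT the PC/TP-specific parameterizations of the paper.
import numpy as np

def init_mup(d_in, width, d_out, rng):
    # muP-style init: input/hidden weight variance ~ 1/fan_in, output variance ~ 1/fan_in**2
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(width, d_in))    # input layer
    W2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, width))  # hidden layer
    W3 = rng.normal(0.0, 1.0 / width, size=(d_out, width))           # output (readout) layer
    return W1, W2, W3

def mup_lrs(base_lr, width):
    # muP-style per-layer SGD learning rates: input ~ eta*width, hidden ~ eta, output ~ eta/width
    return base_lr * width, base_lr, base_lr / width

def train(width, base_lr, steps=300, d_in=8, d_out=1, n=256, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d_in))
    y = X @ rng.normal(size=(d_in, d_out))             # targets from a linear teacher
    W1, W2, W3 = init_mup(d_in, width, d_out, rng)
    lr1, lr2, lr3 = mup_lrs(base_lr, width)
    for _ in range(steps):
        h1 = X @ W1.T                                  # first-layer features
        h2 = h1 @ W2.T                                 # second-layer features
        out = h2 @ W3.T
        err = out - y                                  # d(0.5*MSE)/d(out), up to the 1/n factor
        g3 = err.T @ h2 / n                            # backprop gradients, layer by layer
        g2 = (err @ W3).T @ h1 / n
        g1 = (err @ W3 @ W2).T @ X / n
        W1 -= lr1 * g1
        W2 -= lr2 * g2
        W3 -= lr3 * g3
    return 0.5 * np.mean((X @ W1.T @ W2.T @ W3.T - y) ** 2)

# With muP scaling, the same base learning rate remains reasonable as width grows.
for w in (64, 256, 1024):
    print(f"width={w:5d}  final loss={train(width=w, base_lr=0.1):.4f}")
```

Keeping the base learning rate fixed while the width grows is the kind of width-robust behavior (hyperparameter transfer) that the paper verifies for PC and TP under their respective μP scalings.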
