
Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization

Ravi Srinivasan · Francesca Mignacco · Martino Sorbaro · Maria Refinetti · Avi Cooper · Gabriel Kreiman · Giorgia Dellaferrera

Halle B #205
Wed 8 May 1:45 a.m. PDT — 3:45 a.m. PDT


"Forward-only" algorithms, which train neural networks while avoiding a backward pass, have recently gained attention as a way of addressing the biologically unrealistic aspects of backpropagation. Here, we first address open challenges related to the "forward-only" rules, which include reducing the performance gap with backpropagation and providing an analytical understanding of their dynamics. To this end, we show that the forward-only algorithm with top-down feedback is well approximated by an "adaptive-feedback-alignment" algorithm, and we analytically track its performance during learning in a prototypical high-dimensional setting. Then, we compare different versions of forward-only algorithms, focusing on the Forward-Forward and PEPITA frameworks, and we show that they share the same learning principles. Overall, our work unveils the connections between three key neuro-inspired learning rules, providing a link between "forward-only" algorithms, i.e., Forward-Forward and PEPITA, and an approximation of backpropagation, i.e., Feedback Alignment.
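As a rough illustration of the forward-only principle discussed above, the sketch below implements a PEPITA-style update on a toy two-layer network in NumPy: the output error is projected back onto the input through a fixed random matrix (analogous to Feedback Alignment's fixed feedback weights), a second "modulated" forward pass is run, and the weights are updated from quantities available in the two forward passes only, with no backward pass. The network sizes, learning rate, feedback scale, and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> h = relu(W1 @ x) -> y = W2 @ h.
n_in, n_hid, n_out = 8, 16, 4
W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_hid, n_in))
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_hid), (n_out, n_hid))
# Fixed random projection of the output error back onto the input
# (kept fixed during training, like Feedback Alignment's feedback matrix).
F = rng.normal(0.0, 0.05, (n_in, n_out))

def relu(z):
    return np.maximum(z, 0.0)

def pepita_step(x, target, lr=0.01):
    """One PEPITA-style update: two forward passes, no backward pass."""
    global W1, W2
    # First (clean) forward pass
    h = relu(W1 @ x)
    y = W2 @ h
    e = y - target                      # output error
    # Second (modulated) forward pass, with the error added to the input
    x_err = x + F @ e
    h_err = relu(W1 @ x_err)
    # Updates use only activations from the two forward passes
    W1 -= lr * np.outer(h - h_err, x_err)
    W2 -= lr * np.outer(e, h_err)
    return float((e ** 2).mean())

x = rng.normal(size=n_in)
target = np.zeros(n_out)
target[1] = 1.0
losses = [pepita_step(x, target) for _ in range(200)]
# On this toy input, the squared error shrinks over repeated updates.
```

Because the feedback path is a fixed random matrix rather than the transpose of the forward weights, the forward weights gradually align with it during training, which is the intuition behind the "adaptive-feedback-alignment" approximation studied in the paper.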
