

Poster in Workshop: XAI4Science: From Understanding Model Behavior to Discovering New Scientific Knowledge

Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation

Leander Weber · Jim Berend · Moritz Weckbecker · Alexander Binder · Thomas Wiegand · Wojciech Samek · Sebastian Lapuschkin


Abstract:

Gradient-based optimization has been a cornerstone of machine learning, enabling the vast advances in Artificial Intelligence (AI) over the past decades. However, this type of optimization is energy- and memory-inefficient and requires differentiation, which precludes its application to highly efficient but non-differentiable (e.g., neuromorphic) architectures and the corresponding specialized hardware. Such constraints can become limiting in energy-restricted applications such as edge devices for collecting scientific data in the field. We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural-network-like predictors that utilizes local explainability to decompose a reward and distribute it to individual neurons based on their respective contributions to solving a given task. While its computational complexity is comparable to that of gradient descent, LFP does not require differentiation and produces sparse, efficient updates and models. We establish the convergence of LFP theoretically and empirically, validating its effectiveness on various models and datasets. Via two applications, the approximation-free training of Spiking Neural Networks (SNNs) and neural network pruning, we demonstrate that LFP combines increased flexibility with efficiency in terms of computation and representation.
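The core mechanism described in the abstract can be illustrated with a small, self-contained sketch. The NumPy code below is not the authors' implementation: the reward definition, the LRP-like proportional redistribution, and the magnitude-based update rule are simplifying assumptions chosen only to show how a scalar reward can be decomposed layer by layer into per-connection feedback and applied as an update signal without differentiating a loss.

```python
# Conceptual sketch of an LFP-style update for a two-layer ReLU network in
# plain NumPy. NOT the paper's exact algorithm: reward choice, redistribution
# rule, and update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))             # toy inputs
y = rng.integers(0, 3, size=64)          # toy class labels
W1 = rng.normal(scale=0.5, size=(4, 8))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 3))  # hidden -> output weights
lr = 0.05

def stabilize(z, eps=1e-6):
    # Keep denominators away from zero, similar in spirit to LRP's epsilon rule.
    return z + eps * np.where(z >= 0, 1.0, -1.0)

def forward(x):
    a1 = np.maximum(x @ W1, 0.0)         # hidden activations
    return a1, a1 @ W2                   # logits

def lfp_backward(x, a1, reward_out):
    """Redistribute a per-output reward backwards, proportionally to each
    connection's contribution z_ij = a_i * w_ij, and accumulate per-weight
    feedback along the way."""
    z2 = a1[:, :, None] * W2[None, :, :]                    # (B, 8, 3)
    share2 = z2 / stabilize(z2.sum(axis=1, keepdims=True))
    msg2 = share2 * reward_out[:, None, :]
    fb_W2 = msg2.mean(axis=0)                               # feedback per weight
    reward_hidden = msg2.sum(axis=-1)                       # reward at each hidden unit

    z1 = x[:, :, None] * W1[None, :, :]                     # (B, 4, 8)
    share1 = z1 / stabilize(z1.sum(axis=1, keepdims=True))
    fb_W1 = (share1 * reward_hidden[:, None, :]).mean(axis=0)
    return fb_W1, fb_W2

for step in range(200):
    a1, out = forward(X)
    # Simple reward: +1 on the correct logit, -1 elsewhere (an assumption,
    # not necessarily the reward used in the paper).
    reward = -np.ones_like(out)
    reward[np.arange(len(y)), y] = 1.0
    fb_W1, fb_W2 = lfp_backward(X, a1, reward)
    # Assumed update: positive feedback strengthens a connection's magnitude,
    # negative feedback weakens it.
    W1 += lr * np.sign(W1) * fb_W1
    W2 += lr * np.sign(W2) * fb_W2

print("toy training accuracy:", (forward(X)[1].argmax(1) == y).mean())
```

Note that in this toy setup no derivative of a loss is computed anywhere; the only backward quantities are proportional reward shares, which is what makes this kind of principle compatible, in principle, with non-differentiable units.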
