

Poster in Workshop: Neural Network Weights as a New Data Modality

GradMetaNet: An Equivariant Architecture for Learning on Gradients

Yoav Gelberg · Yam Eitan · Aviv Navon · Aviv Shamsian · Theo Putterman · Haggai Maron

Keywords: [ curvature ] [ model editing ] [ parameter symmetries ] [ equivariance ] [ learned optimizers ] [ gradients ]


Abstract:

Gradients of neural networks encode valuable information for the optimization, editing, and analysis of models. Practitioners therefore often treat gradients as inputs to task-specific algorithms, e.g., using gradient statistics for pruning and optimization. Recent works explore learning algorithms that operate directly on gradients, but they use architectures not specifically designed for gradient processing, which limits their applicability. In this paper, we present a principled approach for designing architectures that process gradients. Our approach is guided by three principles: (1) an equivariant design that preserves neuron permutation symmetries, (2) processing sets of gradients across multiple data points to capture curvature information, and (3) an efficient gradient representation through rank-1 decomposition. Based on these principles, we introduce GradMetaNet, a novel architecture for learning on gradients, constructed from simple equivariant blocks. We prove universality results for GradMetaNet and show that previous approaches cannot approximate natural gradient-based functions. We then demonstrate GradMetaNet's effectiveness on a diverse set of gradient-based tasks on MLPs and transformers, such as learned optimization, INR editing, and estimating loss landscape curvature.
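The rank-1 decomposition in principle (3) follows from basic calculus: for a linear layer y = Wx, the per-example gradient of the loss with respect to W is the outer product of the backpropagated output gradient and the input, so it can be stored as two vectors rather than a full matrix. The sketch below is a minimal illustration of this fact (it is not code from the paper; PyTorch and all variable names are assumptions).

```python
# Minimal sketch: per-example gradients of a linear layer are rank 1.
# For y = W x and scalar loss L, dL/dW = (dL/dy) x^T, an outer product.
import torch

torch.manual_seed(0)
d_in, d_out = 5, 3
W = torch.randn(d_out, d_in, requires_grad=True)
x = torch.randn(d_in)

y = W @ x
loss = y.pow(2).sum()          # any scalar loss works here
loss.backward()

g_y = 2 * y.detach()           # dL/dy for this particular loss
rank1 = torch.outer(g_y, x)    # outer-product reconstruction of dL/dW

assert torch.allclose(W.grad, rank1, atol=1e-6)
print(torch.linalg.matrix_rank(W.grad))  # prints 1
```

Storing each per-example gradient as the pair (g_y, x) instead of the dense d_out x d_in matrix is what makes processing sets of gradients across many data points, as in principle (2), tractable.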
