

Poster

Deep Weight Factorization: Sparse Learning Through the Lens of Artificial Symmetries

Chris Kolb · Tobias Weber · Bernd Bischl · David Rügamer

Hall 3 + Hall 2B #374
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: Sparse regularization techniques are well-established in machine learning, yet their application in neural networks remains challenging due to the non-differentiability of penalties like the L1 norm, which is incompatible with stochastic gradient descent. A promising alternative is shallow weight factorization, where weights are decomposed into two factors, allowing for smooth optimization of L1-penalized neural networks by adding differentiable L2 regularization to the factors. In this work, we introduce deep weight factorization, extending previous shallow approaches to more than two factors. We theoretically establish the equivalence of our deep factorization with non-convex sparse regularization and analyze its impact on training dynamics and optimization. To address the limitations posed by standard training practices, we propose a tailored initialization scheme and identify important learning rate requirements necessary for training factorized networks. We demonstrate the effectiveness of our deep weight factorization through experiments on various architectures and datasets, consistently outperforming its shallow counterpart and widely used pruning methods.
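
The sketch below illustrates the core idea described in the abstract: a weight tensor is represented as the elementwise product of D factors, and smooth L2 regularization on the factors stands in for a non-convex sparsity penalty on the collapsed weight. This is a minimal illustrative example, not the authors' implementation; the class name FactorizedLinear, the depth and init_scale parameters, and the simple scaled-normal initialization are assumptions made for the sketch (the paper proposes a tailored initialization scheme instead).

```python
import torch
import torch.nn as nn

class FactorizedLinear(nn.Module):
    """Linear layer whose weight is an elementwise product of D factors.

    Hypothetical sketch of deep weight factorization: the effective weight
    is w = w_1 * w_2 * ... * w_D (Hadamard product). Applying differentiable
    L2 regularization (weight decay) to each factor induces a non-convex,
    sparsity-promoting penalty on the collapsed weight w.
    """

    def __init__(self, in_features, out_features, depth=3, init_scale=1.0):
        super().__init__()
        # Placeholder initialization; the paper argues that a tailored
        # initialization scheme is needed for factorized training.
        self.factors = nn.ParameterList(
            nn.Parameter(init_scale * torch.randn(out_features, in_features))
            for _ in range(depth)
        )
        self.bias = nn.Parameter(torch.zeros(out_features))

    def collapsed_weight(self):
        # Hadamard product of all factors gives the effective weight.
        w = self.factors[0]
        for f in self.factors[1:]:
            w = w * f
        return w

    def forward(self, x):
        return nn.functional.linear(x, self.collapsed_weight(), self.bias)


# Usage: a smooth L2 penalty on the factors replaces a non-smooth
# sparse penalty on the collapsed weight (dummy loss for illustration).
layer = FactorizedLinear(64, 32, depth=3)
x = torch.randn(8, 64)
out = layer(x)
l2_penalty = sum((f ** 2).sum() for f in layer.factors)
loss = out.pow(2).mean() + 1e-4 * l2_penalty
loss.backward()
```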
