

Poster

GeoLoRA: Geometric integration for parameter efficient fine-tuning

Steffen Schotthöfer · Emanuele Zangrando · Gianluca Ceruti · Francesco Tudisco · Jonas Kusch

Hall 3 + Hall 2B #457
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Low-Rank Adaptation (LoRA) has become a widely used method for parameter-efficient fine-tuning of large-scale, pre-trained neural networks. However, LoRA and its extensions face several challenges, including the need for rank adaptivity, robustness, and computational efficiency during the fine-tuning process. We introduce GeoLoRA, a novel approach that addresses these limitations by leveraging dynamical low-rank approximation theory. GeoLoRA requires only a single backpropagation pass over the small-rank adapters, significantly reducing computational cost compared to similar dynamical low-rank training methods and making it faster than popular baselines such as AdaLoRA. This allows GeoLoRA to efficiently adapt the allocated parameter budget across the model, achieving smaller low-rank adapters than heuristic methods like AdaLoRA and LoRA, while maintaining critical convergence, descent, and error-bound theoretical guarantees. The resulting method is not only more efficient but also more robust to varying hyperparameter settings. We demonstrate the effectiveness of GeoLoRA on several state-of-the-art benchmarks, showing that it outperforms existing methods in both accuracy and computational efficiency.
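For context on the adapter structure the abstract refers to, below is a minimal sketch of a standard LoRA layer in PyTorch, which parameterizes a frozen weight matrix W0 plus a trainable low-rank update BA. This is the generic baseline parameterization only; the class name LoRALinear and the rank and alpha parameters are illustrative, and GeoLoRA's geometric integrator and adaptive rank allocation are not shown here.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update: W0 + B @ A."""

        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)  # only the small-rank adapter is trained
            out_features, in_features = base.weight.shape
            # A projects down to the low-rank space; B projects back up.
            # B is zero-initialized so the adapter starts as the identity update.
            self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(out_features, rank))
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

    # Usage: wrap a pre-trained layer; fine-tuning updates only A and B.
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    y = layer(torch.randn(4, 768))

The adapter adds only rank * (in_features + out_features) trainable parameters per layer, which is why choosing the right rank per layer, the budget-allocation problem GeoLoRA addresses, matters for the overall parameter count.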
