Poster

ComLoRA: A Competitive Learning Approach for Enhancing LoRA

Qiushi Huang · Tom Ko · Lilian Tang · Yu Zhang

Hall 3 + Hall 2B #296
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: We propose a Competitive Low-Rank Adaptation (ComLoRA) framework to address the limitations of the LoRA method, which either lacks capacity with a single rank-r LoRA or risks inefficiency and overfitting with a larger rank-Kr LoRA, where K is an integer larger than 1. The proposed ComLoRA method initializes K distinct LoRA components, each with rank r, and allows them to compete during training. This competition drives each LoRA component to outperform the others, improving overall model performance. The best-performing LoRA is selected based on validation metrics, ensuring that the final model outperforms a single rank-r LoRA and matches the effectiveness of a larger rank-Kr LoRA, all while avoiding extra computational overhead during inference. To the best of our knowledge, this is the first work to introduce and explore competitive learning in the context of LoRA optimization. ComLoRA's code is available at https://github.com/hqsiswiliam/comlora.
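The core idea in the abstract — train K rank-r LoRA components and keep only the best one by validation metrics, so inference pays for a single rank-r adapter — can be illustrated with a toy sketch. The following is a minimal NumPy illustration of that selection step on a synthetic regression task, not the paper's actual competitive-training procedure (here each component is simply trained independently and "competition" is reduced to picking the validation winner); all names, dimensions, and the training loop are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, K = 8, 2, 4            # feature dim, LoRA rank r, number K of competing components
W = rng.normal(size=(d, d))  # frozen base weight (stands in for a pretrained layer)

# Synthetic target: the "ideal" update is itself low-rank, as LoRA assumes.
Delta_true = 0.1 * rng.normal(size=(d, r)) @ rng.normal(size=(r, d))
X_train, X_val = rng.normal(size=(64, d)), rng.normal(size=(32, d))
y_train = X_train @ (W + Delta_true)
y_val = X_val @ (W + Delta_true)

def train_lora(seed, steps=200, lr=0.01):
    """Train one rank-r LoRA component (A, B) by gradient descent on MSE."""
    rng_k = np.random.default_rng(seed)
    A = 0.1 * rng_k.normal(size=(d, r))   # down-projection, random init
    B = np.zeros((r, d))                  # up-projection, zero init as in LoRA
    for _ in range(steps):
        err = X_train @ (W + A @ B) - y_train
        grad_out = X_train.T @ err / len(X_train)  # gradient w.r.t. the update A @ B
        A, B = A - lr * (grad_out @ B.T), B - lr * (A.T @ grad_out)
    val_loss = float(np.mean((X_val @ (W + A @ B) - y_val) ** 2))
    return (A, B), val_loss

# "Competition" sketch: each of the K components trains, the validation winner survives.
components = [train_lora(seed=k) for k in range(K)]
losses = [loss for _, loss in components]
best_idx = int(np.argmin(losses))
(A_best, B_best), best_loss = components[best_idx]
print(f"component {best_idx} wins with validation MSE {best_loss:.4f}")
```

Because only the winning pair (A_best, B_best) is merged into W, inference cost is identical to a single rank-r LoRA, which is the efficiency property the abstract highlights.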