Poster
HiRA: Parameter-Efficient Hadamard High-Rank Adaptation for Large Language Models
Qiushi Huang · Tom Ko · Zhan ZHUANG · Lilian Tang · Yu Zhang
Hall 3 + Hall 2B #245
Oral presentation: Oral Session 3E
Thu 24 Apr 7:30 p.m. PDT — 9 p.m. PDT
Fri 25 Apr midnight PDT — 2:30 a.m. PDT
Abstract:
We propose Hadamard High-Rank Adaptation (HiRA), a parameter-efficient fine-tuning (PEFT) method that enhances the adaptability of Large Language Models (LLMs). While Low-rank Adaptation (LoRA) is widely used to reduce resource demands, its low-rank updates may limit its expressiveness on new tasks. HiRA addresses this by using a Hadamard product to retain high-rank update parameters, improving model capacity. Empirically, HiRA outperforms LoRA and its variants on several tasks, with extensive ablation studies validating its effectiveness. Our code is available at https://github.com/hqsiswiliam/hira.
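The sketch below illustrates the core idea described in the abstract: keeping the trainable parameter count low (LoRA-style factors) while making the effective weight update high-rank via a Hadamard (elementwise) product. The specific formulation here, taking the Hadamard product between the frozen weight and a low-rank factor, is an assumption for illustration only; the class name `HiRALinear`, the init scheme, and the layer sizes are hypothetical. See the official repository linked above for the exact method.

```python
import torch
import torch.nn as nn


class HiRALinear(nn.Module):
    """Sketch of a Hadamard high-rank adapter around a frozen linear layer.

    Assumes the update is W0 * (B @ A), i.e. the elementwise product of the
    frozen weight with a LoRA-style low-rank factor; this is one plausible
    reading of the abstract, not a reproduction of the released code.
    """

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep the pretrained weight frozen
        out_f, in_f = base.weight.shape
        # Low-rank factors: A starts small, B starts at zero so the adapter
        # is a no-op before training (as in LoRA).
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The Hadamard product with the full frozen weight keeps the effective
        # update high-rank while training only rank * (in_f + out_f) params.
        delta_w = self.base.weight * (self.B @ self.A)
        return self.base(x) + x @ delta_w.T


# Minimal usage example (hypothetical sizes).
layer = nn.Linear(768, 768)
adapted = HiRALinear(layer, rank=8)
y = adapted(torch.randn(2, 768))
```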