

Poster in Workshop: 2nd Workshop on Mathematical and Empirical Understanding of Foundation Models

On Fairness Implications and Evaluations of Low-Rank Adaptation of Large Models

Ken Liu · Zhoujie Ding · Berivan Isik · Sanmi Koyejo


Abstract:

Low-rank adaptation of large models for downstream tasks, as exemplified by LoRA, has gained traction due to its computational efficiency. This efficiency, contrasted with the prohibitive cost of full-model fine-tuning, means that practitioners often turn to LoRA, sometimes without fully exploring its ramifications. In this pilot study, we focus on the fairness implications of LoRA, examining its impact on the performance of different subgroups for a given fine-tuning task compared to a full-model fine-tuning baseline. We conduct extensive experiments across vision and language domains and classification and generation tasks on ViT-Base, Swin-v2-Large, Llama-2 7B, and Mistral 7B. Our findings reveal a nuanced landscape: while it is possible to cherry-pick specific instances where LoRA exacerbates bias among subgroups, we find no significant evidence of a consistent pattern of such disparities across the board. Our study also highlights challenges in assessing fine-tuning fairness for generative tasks, in terms of task design and model token bias, urging more rigorous and careful fairness evaluations.
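
As context for the setting studied above, here is a minimal PyTorch sketch of what LoRA changes relative to full fine-tuning, together with one coarse subgroup-gap metric of the kind such a fairness comparison might use. This is an illustration only: `LoRALinear`, `subgroup_accuracy_gap`, and the rank/scaling defaults are hypothetical names and choices, not the paper's actual implementation or evaluation protocol.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (hypothetical sketch).

    Computes h = W x + (alpha / r) * B A x, where only A (r x d_in) and
    B (d_out x r) are trained; the base weights W stay frozen.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # full-model weights are not updated
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: adapter starts as a no-op
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

def subgroup_accuracy_gap(preds: torch.Tensor,
                          labels: torch.Tensor,
                          groups: torch.Tensor) -> float:
    """Best-minus-worst subgroup accuracy: one simple disparity summary
    that can be compared between LoRA and full fine-tuning."""
    accs = []
    for g in groups.unique():
        mask = groups == g
        accs.append((preds[mask] == labels[mask]).float().mean().item())
    return max(accs) - min(accs)
```

Comparing such a gap for a LoRA-adapted model against a fully fine-tuned baseline, per task and per subgroup attribute, is the flavor of evaluation the abstract describes; the paper's actual metrics and tasks are richer than this single number.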
