On Fairness of Low-Rank Adaptation of Large Models
CoRR (2024)
Abstract
Low-rank adaptation of large models, particularly LoRA, has gained traction
due to its computational efficiency. Because full-model fine-tuning is often
prohibitively expensive, practitioners frequently turn to LoRA, sometimes
without a complete understanding of its ramifications. In this study, we focus
on fairness and ask whether LoRA has an
unexamined impact on utility, calibration, and resistance to membership
inference across different subgroups (e.g., genders, races, religions) compared
to a full-model fine-tuning baseline. We present extensive experiments across
vision and language domains and across classification and generation tasks
using ViT-Base, Swin-v2-Large, Llama-2 7B, and Mistral 7B. Intriguingly,
experiments suggest that while one can isolate cases where LoRA exacerbates
model bias across subgroups, the pattern is inconsistent: in many cases, LoRA
yields equivalent or even improved fairness compared to the base model or its
full fine-tuning baseline. We also examine complications in evaluating
fine-tuning fairness that arise from task design and model token bias, and we
call for more careful fairness evaluations in future work.
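
The abstract does not spell out LoRA internals; as a reference point, the
following is a minimal sketch of the low-rank update LoRA applies to a frozen
weight matrix. PyTorch is assumed, and the rank r, scaling alpha, and
zero-initialized B factor are conventional choices rather than details taken
from this paper.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Minimal LoRA wrapper: base(x) + (alpha / r) * B A x, with W frozen."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # only the low-rank factors are trained
            # A: Gaussian init; B: zeros, so the update starts at zero
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # frozen full-rank path plus trainable low-rank correction
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

In typical use, such a wrapper replaces attention or MLP projection layers of
a pretrained model, so only the small A and B matrices are updated during
fine-tuning.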
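The abstract also leaves the per-subgroup fairness metrics unspecified. As an
illustration of the kind of quantity being compared, the sketch below assumes
one common formulation: the gap in expected calibration error (ECE) across
subgroups (e.g., genders, races). The function names and binning scheme are
assumptions for illustration, not this paper's protocol.

    import numpy as np

    def ece(probs: np.ndarray, labels: np.ndarray, n_bins: int = 10) -> float:
        """Expected calibration error from predicted class probabilities."""
        conf = probs.max(axis=1)
        pred = probs.argmax(axis=1)
        correct = (pred == labels).astype(float)
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        total = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (conf > lo) & (conf <= hi)
            if mask.any():
                # weight each bin's |accuracy - confidence| by its sample share
                total += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
        return total

    def subgroup_ece_gap(probs, labels, groups):
        """Max minus min ECE across subgroups: one notion of calibration fairness."""
        per_group = {g: ece(probs[groups == g], labels[groups == g])
                     for g in np.unique(groups)}
        vals = list(per_group.values())
        return max(vals) - min(vals), per_group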