

Poster

Parameter-Efficient Multi-Task Model Fusion with Partial Linearization

Anke Tang · Li Shen · Yong Luo · Yibing Zhan · Han Hu · Bo Du · Yixin Chen · Dacheng Tao

Halle B #185

Abstract:

Large pre-trained models have enabled significant advances in machine learning and serve as foundation components. Model fusion methods, such as task arithmetic, have proven to be powerful and scalable for incorporating fine-tuned weights from different tasks into a multi-task model. However, efficiently fine-tuning large pre-trained models on multiple downstream tasks remains challenging, leading to inefficient multi-task model fusion. In this work, we propose a novel method to improve multi-task fusion for parameter-efficient fine-tuning techniques such as LoRA. Specifically, our approach partially linearizes only the adapter modules and applies task arithmetic over the linearized adapters. This allows us to leverage the advantages of model fusion over linearized fine-tuning while still performing fine-tuning and inference efficiently. We demonstrate that our partial linearization technique enables more effective fusion of multiple tasks into a single model, outperforming standard adapter tuning and task arithmetic alone. Experimental results show that the proposed partial linearization technique effectively constructs unified multi-task models via the fusion of fine-tuned task vectors. We evaluate performance over an increasing number of tasks and find that our approach outperforms standard parameter-efficient fine-tuning techniques. The results highlight the benefits of partial linearization for scalable and efficient multi-task model fusion.
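A minimal sketch (not the authors' code) of the two ingredients the abstract describes: (1) a LoRA-augmented linear layer whose adapter branch is linearized, i.e. replaced by its first-order Taylor expansion around the adapter initialization, and (2) task arithmetic over the resulting adapter weights. Names such as LinearizedLoRALinear, merge_adapters, rank, and lam are illustrative assumptions, not identifiers from the paper.

```python
import torch
import torch.nn as nn
from torch.func import jvp


class LinearizedLoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a LoRA adapter whose contribution
    is linearized (first-order Taylor expansion) around the adapter init."""

    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base.requires_grad_(False)  # pre-trained weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        # Linearization point theta_0: a frozen snapshot of the adapter init.
        self.register_buffer("A0", self.lora_A.detach().clone())
        self.register_buffer("B0", self.lora_B.detach().clone())

    def forward(self, x):
        # Linearized adapter branch:
        #   f_lin(x; theta) = f(x; theta_0) + J_theta f(x; theta_0) (theta - theta_0),
        # where f is the LoRA branch and theta = (lora_A, lora_B).
        adapter = lambda A, B: x @ A.T @ B.T
        out0, jvp_out = jvp(
            adapter,
            (self.A0, self.B0),                              # primals: theta_0
            (self.lora_A - self.A0, self.lora_B - self.B0),  # tangents: theta - theta_0
        )
        return self.base(x) + out0 + jvp_out


def merge_adapters(init_state, task_states, lam=0.3):
    """Task arithmetic over adapter weights:
    theta_merged = theta_0 + lam * sum_t (theta_t - theta_0)."""
    merged = {k: v.clone() for k, v in init_state.items()}
    for state in task_states:
        for k in merged:
            merged[k] += lam * (state[k] - init_state[k])
    return merged
```

Only the low-rank adapter parameters are linearized and trained, so fine-tuning and inference remain parameter-efficient, while the task vectors entering the merge live in the (approximately) linear regime that makes task arithmetic behave well.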
