

Poster

Pareto Low-Rank Adapters: Efficient Multi-Task Learning with Preferences

Nikolaos Dimitriadis · Pascal Frossard · François Fleuret

Hall 3 + Hall 2B #142
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract: Multi-task trade-offs in machine learning can be addressed via Pareto Front Learning (PFL) methods that parameterize the Pareto Front (PF) with a single model. PFL permits selecting the desired operational point during inference, in contrast to traditional Multi-Task Learning (MTL), which optimizes for a single trade-off decided prior to training. However, recent PFL methodologies suffer from limited scalability, slow convergence, and excessive memory requirements, while exhibiting inconsistent mappings from preference to objective space. We introduce PaLoRA, a novel parameter-efficient method that addresses these limitations in two ways. First, we augment any neural network architecture with task-specific low-rank adapters and continuously parameterize the Pareto Front in their convex hull. Our approach steers the original model and the adapters towards learning general and task-specific features, respectively. Second, we propose a deterministic sampling schedule of preference vectors that reinforces this division of labor, enabling faster convergence and strengthening the validity of the mapping from preference to objective space throughout training. Our experiments show that PaLoRA outperforms state-of-the-art MTL and PFL baselines across various datasets, scales to large networks, and reduces the memory overhead by 23.8–31.7 times compared with competing PFL baselines in scene understanding benchmarks.
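The following is a minimal sketch, not the authors' implementation, of how a linear layer could be augmented with task-specific low-rank adapters whose convex combination, weighted by a preference vector, selects a point on the learned Pareto Front. The class name `PaLoRALinear` and parameters such as `rank` and `num_tasks` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PaLoRALinear(nn.Module):
    """Linear layer with per-task low-rank adapters (illustrative sketch)."""

    def __init__(self, in_features, out_features, num_tasks, rank=4):
        super().__init__()
        # Shared base weights, intended to capture general features.
        self.base = nn.Linear(in_features, out_features)
        # One low-rank adapter (A_t, B_t) per task for task-specific features.
        self.A = nn.ParameterList(
            [nn.Parameter(0.01 * torch.randn(rank, in_features)) for _ in range(num_tasks)]
        )
        self.B = nn.ParameterList(
            [nn.Parameter(torch.zeros(out_features, rank)) for _ in range(num_tasks)]
        )

    def forward(self, x, preference):
        # preference: tensor of shape (num_tasks,) on the simplex
        # (nonnegative entries summing to one), chosen at inference time.
        out = self.base(x)
        for t, w in enumerate(preference):
            # Convex combination of adapter contributions parameterizes
            # the trade-off between task objectives.
            out = out + w * (x @ self.A[t].t()) @ self.B[t].t()
        return out


# Usage: pick a preference vector and run a forward pass.
layer = PaLoRALinear(in_features=16, out_features=8, num_tasks=2)
pref = torch.tensor([0.3, 0.7])  # desired trade-off between the two tasks
y = layer(torch.randn(4, 16), pref)
print(y.shape)  # torch.Size([4, 8])
```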
