

Poster

Composable Interventions for Language Models

Arinbjörn Kolbeinsson · Kyle O'Brien · Tianjin Huang · Shanghua Gao · Shiwei Liu · Jonathan Schwarz · Anurag Vaidya · Faisal Mahmood · Marinka Zitnik · Tianlong Chen · Thomas Hartvigsen

Hall 3 + Hall 2B #311
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Test-time interventions for language models can enhance factual accuracy, mitigate harmful outputs, and improve model efficiency without costly retraining. But despite a flood of new methods, different types of interventions are largely developing independently. In practice, multiple interventions must be applied sequentially to the same model, yet we lack standardized ways to study how interventions interact. We fill this gap by introducing composable interventions, a framework to study the effects of using multiple interventions on the same language model, featuring new metrics and a unified codebase. Using our framework, we conduct extensive experiments and compose popular methods from three emerging intervention categories: knowledge editing, model compression, and machine unlearning. Our results across 417 different compositions uncover meaningful interactions: compression hinders editing and unlearning, composing interventions hinges on their order of application, and popular general-purpose metrics are inadequate for assessing composability. Taken together, our findings showcase clear gaps in composability, suggesting a need for new multi-objective interventions.
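To make the composition setup concrete, below is a minimal Python sketch of sequential intervention application and an order-sensitivity check. It is an illustration under stated assumptions, not the paper's actual codebase or metrics; `Model`, `Intervention`, `Metric`, and `order_sensitivity` are hypothetical placeholders.

```python
from copy import deepcopy
from typing import Callable, List

# Hypothetical placeholders for illustration: a "model" is whatever object
# the interventions operate on, and an intervention maps a model to a model
# (e.g., a knowledge edit, a compression pass, or an unlearning step).
Model = object
Intervention = Callable[[Model], Model]
Metric = Callable[[Model], float]


def compose(model: Model, interventions: List[Intervention]) -> Model:
    """Apply interventions sequentially to the same model."""
    for intervene in interventions:
        model = intervene(model)
    return model


def order_sensitivity(model: Model, a: Intervention, b: Intervention,
                      metric: Metric) -> float:
    """Gap in a metric between the two application orders of a pair.

    Each order starts from an independent copy of the base model so the
    two runs do not contaminate each other; a large gap suggests the
    pair does not compose cleanly.
    """
    score_ab = metric(compose(deepcopy(model), [a, b]))
    score_ba = metric(compose(deepcopy(model), [b, a]))
    return abs(score_ab - score_ba)
```

Sweeping such a check over pairs (and longer chains) drawn from the three intervention categories is one simple way to surface the order effects the abstract reports.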
