

Poster

Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning

Gangwei Jiang · Caigao Jiang · Zhaoyi Li · Siqiao Xue · Jun Zhou · Linqi Song · Defu Lian · Ying Wei

Hall 3 + Hall 2B #292
Thu 24 Apr midnight PDT — 2:30 a.m. PDT
 
Oral presentation: Oral Session 1B
Wed 23 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

Catastrophic forgetting (CF) poses a significant challenge in machine learning, where a model forgets previously learned information upon learning new tasks. Despite their advanced capabilities, Large Language Models (LLMs) continue to face challenges with CF during continual learning. Most existing research analyzes forgetting patterns through a single training sequence, overlooking the intricate effects that diverse tasks have on model behavior. Our study explores CF across various settings, discovering that model forgetting is influenced by both the specific training tasks and the models themselves. To this end, we interpret forgetting by examining the function vector (FV), a compact representation of functions in LLMs, which offers a model-dependent indicator for the occurrence of CF. Through theoretical and empirical analyses, we demonstrate that CF in LLMs primarily stems from biases in function activation rather than the overwriting of task-processing functions. Leveraging these insights, we propose a novel function vector guided training methodology that incorporates a regularization term to stabilize the FV and mitigate forgetting. Empirical tests on four benchmarks confirm the effectiveness of the proposed training method, substantiating our theoretical framework concerning CF and model function dynamics.
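To make the core idea concrete, below is a minimal sketch (not the authors' code) of FV-guided training: alongside the task loss, an L2 penalty keeps the model's current FV close to a reference FV captured before training on the new task. The FV here is simplified to the mean final-token hidden state over a small probe batch, whereas the paper builds the FV from selected attention heads; the toy model, probe data, and penalty weight `lam` are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """Tiny stand-in language model; any LLM exposing hidden states works."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x, return_hidden=False):
        h = self.layer(self.embed(x))
        return (self.head(h), h) if return_hidden else self.head(h)

def extract_fv(model, probe):
    """Simplified FV: mean hidden state at the final token of probe prompts.
    (The paper's FV aggregates outputs of selected attention heads instead.)"""
    _, h = model(probe, return_hidden=True)
    return h[:, -1, :].mean(dim=0)

model = ToyLM()
probe = torch.randint(0, 100, (8, 16))       # probe prompts for the old task
with torch.no_grad():
    fv_ref = extract_fv(model, probe)        # reference FV before the new task

opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
lam = 0.1                                    # hypothetical penalty weight
for step in range(100):
    x = torch.randint(0, 100, (8, 16))       # stand-in new-task batch
    logits = model(x)
    task_loss = nn.functional.cross_entropy(  # next-token prediction loss
        logits[:, :-1].reshape(-1, logits.size(-1)), x[:, 1:].reshape(-1))
    fv_now = extract_fv(model, probe)
    # FV stability term: penalize drift of the function vector while
    # fitting the new task, one plausible reading of "stabilize the FV".
    loss = task_loss + lam * (fv_now - fv_ref).pow(2).sum()
    loss.backward()
    opt.step()
    opt.zero_grad()
```

The design choice this sketch illustrates is that the regularizer targets function activation (the FV) rather than all parameters, matching the paper's finding that forgetting stems mainly from biased function activation; the authors' exact regularizer and FV extraction may differ.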
