Poster
CollabEdit: Towards Non-destructive Collaborative Knowledge Editing
Jiamu Zheng · Jinghuai Zhang · Tianyu Du · Xuhong Zhang · Jianwei Yin · Tao Lin
Hall 3 + Hall 2B #490
Collaborative learning of large language models (LLMs) has emerged as a new paradigm for utilizing private data from different parties while guaranteeing efficiency and privacy. Meanwhile, Knowledge Editing (KE) for LLMs has also garnered increasing attention due to its ability to explicitly manipulate the behaviors of LLMs, yet it leaves the collaborative KE case, in which knowledge edits from multiple parties are aggregated in a privacy-preserving and continual manner, unexamined. To this end, this manuscript presents the first investigation of collaborative KE, in which we start by carefully identifying three unique challenges therein: knowledge overlap, knowledge conflict, and knowledge forgetting. We then propose a non-destructive collaborative KE framework, COLLABEDIT, which employs a novel model merging mechanism to mimic the global KE behavior while preventing severe performance drops. Extensive experiments on two canonical datasets demonstrate the superiority of COLLABEDIT compared to destructive baselines, and the results shed light on addressing the three collaborative KE challenges and on future applications. Our code is available at https://github.com/LINs-lab/CollabEdit.
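For intuition, the following is a minimal NumPy sketch of the kind of non-destructive merging the abstract alludes to, assuming a MEMIT-style closed-form edit Delta = R K^T (C + K K^T)^{-1} for a single linear layer, where C is a covariance term that protects pre-existing knowledge. The function names, shapes, and toy usage are illustrative assumptions, not the paper's implementation; the point is only that such an update is linear in the per-party statistics R_i K_i^T and K_i K_i^T, so a server can aggregate those statistics and reproduce the global edit, rather than destructively averaging per-party weight deltas.

import numpy as np

def local_edit_statistics(K_i, R_i):
    # Party i computes sufficient statistics from its own edit requests:
    # keys K_i (d_k x n_i) and value residuals R_i (d_v x n_i).
    # Only these aggregates leave the party, not the raw edit requests.
    return R_i @ K_i.T, K_i @ K_i.T

def merge_global_update(stats, C):
    # Server-side merge: sum the per-party statistics and solve for one
    # update Delta = (sum_i R_i K_i^T) (C + sum_i K_i K_i^T)^{-1}, which
    # matches what a single editor would compute on the union of all
    # parties' edits, instead of averaging per-party weight deltas.
    RK = sum(rk for rk, _ in stats)
    KK = sum(kk for _, kk in stats)
    return np.linalg.solve((C + KK).T, RK.T).T

# Toy usage (random data; shapes are purely illustrative):
rng = np.random.default_rng(0)
d_k, d_v = 8, 4
C = np.eye(d_k)  # stand-in for the preserved-knowledge covariance
parties = [(rng.normal(size=(d_k, 3)), rng.normal(size=(d_v, 3))) for _ in range(2)]
stats = [local_edit_statistics(K, R) for K, R in parties]
delta = merge_global_update(stats, C)  # d_v x d_k update to the layer weight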