

Poster

In-Context Editing: Learning Knowledge from Self-Induced Distributions

Siyuan Qi · Bangcheng Yang · Kailin Jiang · Xiaobo Wang · Jiaqi Li · Yifan Zhong · Yaodong Yang · Zilong Zheng

Hall 3 + Hall 2B #283
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

In scenarios where language models must incorporate new information efficiently without extensive retraining, traditional fine-tuning methods are prone to overfitting, degraded generalization, and unnatural language generation. To address these limitations, we introduce Consistent In-Context Editing (ICE), a novel approach that leverages the model's in-context learning capability to optimize towards a contextual distribution rather than a one-hot target. ICE introduces a simple yet effective optimization framework for the model to internalize new knowledge by aligning the output distribution it produces without the new context to the distribution induced when that context is provided. This method enhances the robustness and effectiveness of gradient-based tuning methods, preventing overfitting and preserving the model's integrity. We analyze ICE across four critical aspects of knowledge editing: accuracy, locality, generalization, and linguistic quality, demonstrating its advantages. Experimental results confirm the effectiveness of ICE and demonstrate its potential for continual editing, ensuring that the integrity of the model is preserved while information is updated.
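The core idea above, replacing a one-hot target with a context-induced distribution, can be illustrated with a minimal sketch. The function names, the pure-Python softmax, and the choice of KL divergence as the alignment measure are illustrative assumptions here, not the paper's exact implementation; the sketch only shows the shape of such an objective: treat the distribution the model produces *with* the new fact in context as a soft target for the distribution it produces *without* that context.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution
    # (max-subtraction for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two discrete distributions over the same support.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def ice_style_loss(logits_without_context, logits_with_context):
    # Hypothetical sketch of a consistency objective in the spirit of ICE:
    # the context-conditioned distribution serves as a soft target
    # (in practice it would be detached from the gradient), and the loss
    # pulls the context-free distribution towards it instead of towards
    # a one-hot label.
    target = softmax(logits_with_context)    # distribution induced by context
    pred = softmax(logits_without_context)   # distribution to be updated
    return kl_divergence(target, pred)
```

When the two distributions already agree, the loss is zero; the further the context-free prediction drifts from the context-induced one, the larger the penalty, which is what keeps the update softer than fitting a one-hot target.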
