

Poster

REEF: Representation Encoding Fingerprints for Large Language Models

Jie Zhang · Dongrui Liu · Chen Qian · Linfeng Zhang · Yong Liu · Yu Qiao · Jing Shao

Hall 3 + Hall 2B #254
[ Project Page ]
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT
 
Oral presentation: Oral Session 4A
Fri 25 Apr 12:30 a.m. PDT — 2 a.m. PDT

Abstract:

Protecting the intellectual property of open-source Large Language Models (LLMs) is essential because training LLMs requires extensive computational resources and data. Model owners and third parties therefore need to identify whether a suspect model is a subsequent development of a victim model. To this end, we propose REEF, a training-free method that identifies the relationship between a suspect and a victim model from the perspective of the LLMs' feature representations. Specifically, REEF computes and compares the centered kernel alignment (CKA) similarity between the representations of the suspect model and the victim model on the same samples. Because it is training-free, REEF does not impair the model's general capabilities, and it is robust to sequential fine-tuning, pruning, model merging, and permutations. In this way, REEF provides a simple and effective means for third parties and model owners to jointly protect LLMs' intellectual property. Our code is publicly accessible at https://github.com/AI45Lab/REEF.
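To illustrate the kind of comparison the abstract describes, below is a minimal sketch of linear CKA between two representation matrices, one common form of centered kernel alignment. It is not the paper's implementation (see the GitHub repository for that), and the variable names and placeholder activations are hypothetical.

import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between representation matrices
    X (n_samples x d1) and Y (n_samples x d2) on the same samples."""
    # Center each feature dimension across samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Hypothetical usage: hidden states from one layer of each model on the
# same prompts; a high score suggests the suspect model derives from the victim.
suspect_feats = np.random.randn(128, 4096)  # placeholder activations
victim_feats = np.random.randn(128, 4096)
print(linear_cka(suspect_feats, victim_feats))

A convenient property of this measure is its invariance to orthogonal transformations and isotropic scaling of the features, which is consistent with the robustness to permutations mentioned in the abstract.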
