Poster in Workshop: Neural Network Weights as a New Data Modality
TeleLoRA: Teleporting Alignment across Large Language Models for Trojan Mitigation
Xiao Lin · Manoj Acharya · Anirban Roy · Susmit Jha
Keywords: [ Trojan mitigation ] [ Trojan detection ] [ hypernetworks ] [ permutation symmetry ] [ backdoor mitigation ] [ weight space learning ] [ meta-learning ]
Mitigating Trojans in Large Language Models (LLMs) is one of many tasks where alignment data is LLM-specific, as different LLMs have different Trojan triggers and trigger behaviors to be removed. In this paper, we introduce TeleLoRA (Teleporting Low-Rank Adaptation), a novel framework that combines model-specific alignment data across multiple LLMs to enable zero-shot Trojan mitigation on unseen LLMs without alignment data. TeleLoRA learns a unified generator of LoRA adapter weights by leveraging local activation information across multiple LLMs. This generator is designed to be permutation symmetric so that it generalizes across models with different architectures and sizes. We optimize the model design for memory efficiency, making it feasible to train on large-scale LLMs with minimal computational resources. Experiments on LLM Trojan mitigation benchmarks demonstrate that TeleLoRA effectively reduces attack success rates while preserving the benign performance of the models.
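To make the core idea concrete, below is a minimal PyTorch sketch of a permutation-equivariant generator of LoRA adapter weights, in the spirit the abstract describes. It is an illustrative reading of the abstract, not the authors' implementation: the class name `LoRAGenerator`, the per-unit activation statistics inputs, and all dimensions are assumptions. Equivariance comes from a DeepSets-style construction: each hidden unit's statistics pass through the same shared MLP, so permuting the target layer's units permutes the generated adapter rows identically.

```python
import torch
import torch.nn as nn

class LoRAGenerator(nn.Module):
    """Hypothetical sketch of a permutation-equivariant LoRA weight generator.

    Per-unit activation statistics are mapped independently through shared
    MLPs, so permuting the hidden units of the target layer permutes the
    rows of the generated adapter factors in the same way. This is one
    plausible reading of the abstract, not the paper's released code.
    """

    def __init__(self, stat_dim: int, rank: int = 8, hidden: int = 64):
        super().__init__()
        # Shared pointwise networks: one per-unit statistic vector in,
        # one rank-sized row of an adapter factor out.
        self.a_net = nn.Sequential(nn.Linear(stat_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, rank))
        self.b_net = nn.Sequential(nn.Linear(stat_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, rank))

    def forward(self, in_stats: torch.Tensor, out_stats: torch.Tensor):
        # in_stats:  (d_in,  stat_dim) statistics of the layer's input units
        # out_stats: (d_out, stat_dim) statistics of the layer's output units
        A = self.a_net(in_stats)   # (d_in,  rank) LoRA "A" factor
        B = self.b_net(out_stats)  # (d_out, rank) LoRA "B" factor
        # Returning the factors (rather than materializing B @ A.T) keeps
        # memory at O((d_in + d_out) * rank) instead of O(d_in * d_out);
        # the low-rank update Delta W = B A^T is applied implicitly.
        return A, B

# Usage: the same generator serves layers of any width, because it only
# ever sees per-unit statistic vectors through shared weights.
gen = LoRAGenerator(stat_dim=4)
A, B = gen(torch.randn(4096, 4), torch.randn(11008, 4))  # one layer's adapter
```

Because the generator's parameters are tied across units and layers, it can emit adapters for models of different widths and depths, which is consistent with the abstract's claim of generalizing across architectures and sizes.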