Poster
in
Workshop: Secure and Trustworthy Large Language Models
Attacks on Third-Party APIs of Large Language Models
Wanru Zhao · Vidit Khazanchi · Haodi Xing · Xuanli He · Qiongkai Xu · Nic Lane
Large language model (LLM) services have recently begun offering a plugin ecosystem to interact with third-party API services. This innovation enhances the capabilities of LLMs but introduces risks since these plugins, developed by various third parties, cannot be easily trusted. This paper proposes a new attacking framework to examine security and safety vulnerabilities within LLM platforms that incorporate third-party services. Applying our framework specifically to widely used LLMs, we identify real-world malicious attacks across various domains on third-party APIs that can imperceptibly modify LLM outputs. The paper discusses the unique challenges posed by third-party API integration and offers strategic possibilities to improve the security and safety of LLM ecosystems moving forward.