
Secure and Trustworthy Large Language Models

Yisen Wang · Ting Wang · Jinghui Chen · Chaowei Xiao · Jieyu Zhao · Nanyun (Violet) Peng · Yulia Tsvetkov · Anima Anandkumar

Schubert 5

Sat 11 May, midnight PDT

Large Language Models (LLMs) have emerged as transformative tools in natural language processing, redefining benchmarks across tasks from machine translation to dialog systems. These advances, however, bring intricate challenges around the security, transparency, and ethical dimensions of LLMs. Such challenges, ranging from bias and misinformation dissemination to vulnerability against sophisticated attacks, have garnered considerable research attention. Our proposed workshop seeks to spotlight these pivotal issues, covering topics including, but not limited to, LLM reliability, interpretability, backdoor defenses, and emerging learning paradigms. The workshop aims to bridge gaps between academia and industry, offering a platform for rigorous discussion, collaborative brainstorming, and a showcase of the latest research breakthroughs. Through this endeavor, we aspire to pave a pathway toward more secure, transparent, and ethically grounded development of LLMs, underlining the importance of collaborative, cross-disciplinary efforts in the process.
