ICLR 2024

Workshop

Secure and Trustworthy Large Language Models

Yisen Wang

Schubert 5
Sat 11 May, midnight PDT

Large Language Models (LLMs) have emerged as transformative tools in natural language processing, redefining benchmarks across tasks from machine translation to dialogue systems. These advances, however, bring intricate challenges around the security, transparency, and ethical dimensions of LLMs. Such challenges, ranging from bias and misinformation dissemination to vulnerability against sophisticated attacks, have garnered considerable research attention. This workshop spotlights these pivotal issues, covering topics including, but not limited to, LLM reliability, interpretability, backdoor defenses, and emerging learning paradigms. It aims to bridge gaps between academia and industry, offering a platform for rigorous discussion, collaborative brainstorming, and a showcase of the latest research breakthroughs. Through this endeavor, we aspire to pave a path toward more secure, transparent, and ethically grounded development of LLMs, underlining the importance of collaborative, cross-disciplinary efforts in the process.
