Workshop
Building Trust in LLMs and LLM Applications: From Guardrails to Explainability to Regulation
Micah Goldblum · Ramasuri Narayanam · Bang An · Soumyabrata Pal · Martin Pawelczyk · Hima Lakkaraju · Shiv Saini
Hall 4 #6
Sun 27 Apr, 5:50 p.m. PDT
As Large Language Models (LLMs) are rapidly adopted across diverse industries, concerns around their trustworthiness, safety, and ethical implications increasingly motivate academic research, industrial development, and legal innovation. LLMs are now integrated into complex applications, where they must navigate challenges related to data privacy, regulatory compliance, and dynamic user interactions. Such applications amplify the potential for LLMs to violate human trust. Ensuring the trustworthiness of LLMs is paramount as they transition from standalone tools to integral components of real-world applications used by millions.

This workshop addresses the unique challenges posed by the deployment of LLMs, ranging from guardrails to explainability to regulation and beyond. It will bring together researchers and practitioners from academia and industry to explore cutting-edge solutions for improving the trustworthiness of LLMs and LLM-driven applications. The program will feature invited talks, a panel discussion, interactive breakout sessions, and poster presentations, fostering rich dialogue and knowledge exchange. We aim to bridge the gap between foundational research and the practical challenges of deploying LLMs in trustworthy, user-centric systems.