Workshop
Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI
Grigorios Chrysos · Yixuan Li · Anastasios Angelopoulos · Stephen Bates · Barbara Plank · Mohammad Emtiyaz Khan
Topaz Concourse
Sat 26 Apr, 6 p.m. PDT
How can we trust large language models (LLMs) when they generate text with confidence yet sometimes hallucinate or fail to recognize their own limitations? As foundation models such as LLMs and multimodal systems become pervasive across high-stakes domains, from healthcare and law to autonomous systems, the need for uncertainty quantification (UQ) is more critical than ever. Uncertainty quantification provides a measure of how much confidence a model has in its predictions or generations, allowing users to assess when to trust the outputs and when human oversight is needed. This workshop focuses on UQ and hallucination in modern LLMs and multimodal systems and explores the open questions in this area.
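As a toy illustration of the kind of uncertainty signal discussed above (not a method proposed by the workshop), the sketch below computes the Shannon entropy of a hypothetical next-token distribution: low entropy suggests a confident prediction, while high entropy suggests the output may warrant human review. The probability vectors are made up for illustration and do not come from any real model.

```python
import numpy as np

def predictive_entropy(probs) -> float:
    """Shannon entropy (in nats) of a single predictive distribution.

    Higher entropy means the model spreads probability mass over many
    options, i.e. it is less certain about its next output.
    """
    p = np.asarray(probs, dtype=float)
    p = p / p.sum()              # normalize defensively
    nonzero = p[p > 0]           # avoid log(0)
    return float(-(nonzero * np.log(nonzero)).sum())

# Hypothetical next-token distributions from a language model:
confident = [0.90, 0.05, 0.03, 0.02]   # mass concentrated on one token
uncertain = [0.30, 0.28, 0.22, 0.20]   # mass spread across several tokens

print(f"confident: {predictive_entropy(confident):.3f} nats")
print(f"uncertain: {predictive_entropy(uncertain):.3f} nats")
```

Entropy is only one of many possible uncertainty signals; calibration, conformal prediction, and ensemble disagreement are among the alternatives the workshop's topic covers.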