

Workshop

VerifAI: AI Verification in the Wild

Celine Lee · Wenting Zhao · Ameesh Shah · Theo X. Olausson · Tao Yu · Sean Welleck

Garnet 218-219

Sat 26 Apr, 5:55 p.m. PDT

This workshop explores the intersection of scale-driven generative artificial intelligence (AI) and the correctness-focused principles of verification. Formal analysis tools such as theorem provers, satisfiability solvers, and execution monitoring have proven successful at ensuring properties of interest across a range of tasks in software development and mathematics where precise reasoning is necessary. However, these methods face scaling challenges. Recently, generative AI such as large language models (LLMs) has been explored as a scalable and adaptable option for producing solutions in these settings. The effectiveness of AI in these settings increases with more compute and data, but unlike traditional formalisms, these models are built around probabilistic methods rather than correctness by construction. In the VerifAI: AI Verification in the Wild workshop, we invite papers and discussions on how to bridge these two fields. Potential angles include, but are not limited to, the following: generative AI for formal methods, formal methods for generative AI, AI as verifiers, datasets and benchmarks, and a special theme, LLMs for code generation. We welcome novel methodologies, analytic contributions, works in progress, negative results, and review and position papers that will foster discussion. We will also have a track for tiny or short papers.


Timezone: America/Los_Angeles
