Poster
in
Workshop: Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI

Conformal Structured Prediction

Botong Zhang · Shuo Li · Osbert Bastani

Keywords: [ Structured Prediction ] [ Conformal Prediction ] [ Integer Programming ]


Abstract:

Conformal prediction has recently emerged as a promising strategy for quantifying the uncertainty of a predictive model; these algorithms modify the model to output sets of labels that are guaranteed to contain the true label with high probability. However, existing conformal prediction algorithms have largely targeted classification and regression settings, where the prediction set has a simple form as a level set of the scoring function. For complex structured outputs such as text generation by large language models (LLMs), these prediction sets might include a large number of labels and therefore be hard for users to interpret. In this paper, we propose a general framework for conformal prediction in the structured prediction setting that modifies existing conformal prediction algorithms to output structured prediction sets that implicitly represent sets of labels and encode uncertainty. In addition, we demonstrate how our approach can be applied in domains where the prediction sets can be represented as a set of nodes in a directed acyclic graph; for instance, in code generation, a prediction set can be an abstract syntax tree (AST) parsed from the generated code, with certain nodes removed to represent uncertain components. We demonstrate how our algorithm can be used to construct prediction sets that satisfy a desired coverage guarantee in two generative tasks.
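To make the baseline concrete, the classification-style prediction sets described above (level sets of a scoring function with a coverage guarantee) can be sketched with standard split conformal prediction. This is a minimal illustration, not the paper's structured-prediction method; the synthetic data and the `1 - p(true label)` nonconformity score are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: pretend model outputs are probability vectors over 5 labels,
# and the true label is drawn from that distribution (a stand-in for a
# well-calibrated predictive model; purely illustrative).
n_cal, n_labels = 1000, 5
cal_probs = rng.dirichlet(np.ones(n_labels), size=n_cal)
cal_labels = np.array([rng.choice(n_labels, p=p) for p in cal_probs])

alpha = 0.1  # target miscoverage: sets should contain the truth ~90% of the time

# Nonconformity score: one minus the probability assigned to the true label.
cal_scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile with the finite-sample correction.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(cal_scores, q_level, method="higher")

def prediction_set(p):
    # A level set of the scoring function: keep labels whose score <= qhat.
    return np.flatnonzero(1.0 - p <= qhat)

# Check empirical coverage on fresh, exchangeable test points.
test_probs = rng.dirichlet(np.ones(n_labels), size=1000)
test_labels = np.array([rng.choice(n_labels, p=p) for p in test_probs])
covered = np.mean([y in prediction_set(p)
                   for p, y in zip(test_probs, test_labels)])
print(f"empirical coverage: {covered:.3f}")
```

The paper's contribution addresses the case where such flat label sets become unwieldy: rather than enumerating many structured outputs, the set is represented implicitly, e.g. as a DAG (such as a partial AST) whose removed nodes mark uncertain components.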
