Poster
in
Workshop: VerifAI: AI Verification in the Wild

Lightweight Latent Verifiers for Efficient Meta-Generation Strategies

Bartosz Piotrowski · Witold Drzewakowski · Konrad Staniszewski · Piotr Miłoś


Abstract: We study *verifiers*, understood as auxiliary models that estimate the correctness of outputs generated by base large language models (LLMs). Such approximate verifiers are crucial in many strategies for solving reasoning-intensive problems with LLMs. Typically, verifiers are LLMs themselves, often as large as (or larger than) the base model they support, making them computationally expensive. In this work, we introduce a novel lightweight verification approach, LiLaVe, which reliably extracts correctness signals from the hidden states of the base LLM. A key advantage of LiLaVe is its ability to operate with only a small fraction of the computational budget required by traditional LLM-based verifiers. To demonstrate its practicality, we couple LiLaVe with popular meta-generation strategies, like best-of-$n$ or self-consistency. We also design novel LiLaVe-based approaches, like *conditional self-correction* and *conditional majority voting*, that improve both accuracy and efficiency in generation tasks with smaller LLMs. Our work opens the door to scalable and resource-efficient solutions for reasoning-intensive applications.
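To make the setup concrete, here is a minimal, self-contained sketch of how a lightweight verifier scoring hidden-state features could plug into best-of-$n$ selection and a verifier-gated majority vote. All names (`verifier_score`, `best_of_n`, `conditional_majority_vote`, the toy linear weights, and the synthetic hidden states) are illustrative assumptions, not the paper's actual API or model; LiLaVe's real verifier and thresholds would be learned from the base LLM's hidden states.

```python
# Hedged sketch: a lightweight verifier over hidden-state features, used for
# best-of-n selection and a verifier-gated ("conditional") majority vote.
# The linear weights and the tiny 3-d "hidden states" are synthetic stand-ins.
import math
from collections import Counter

WEIGHTS = [0.8, -0.3, 0.5]  # toy stand-in for a trained lightweight verifier
BIAS = -0.1

def verifier_score(hidden_state):
    """Estimated probability that the generation behind this hidden state is correct."""
    z = sum(w * h for w, h in zip(WEIGHTS, hidden_state)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def best_of_n(candidates):
    """Return the candidate whose hidden state the verifier scores highest."""
    return max(candidates, key=lambda c: verifier_score(c["hidden_state"]))

def conditional_majority_vote(candidates, threshold=0.5):
    """Majority-vote only over candidates the verifier deems likely correct."""
    trusted = [c for c in candidates if verifier_score(c["hidden_state"]) >= threshold]
    pool = trusted if trusted else candidates  # fall back if nothing passes the gate
    return Counter(c["answer"] for c in pool).most_common(1)[0][0]

# Three sampled generations for one question, each with a (synthetic) hidden state.
candidates = [
    {"answer": "42", "hidden_state": [1.2, 0.1, 0.9]},    # scored high
    {"answer": "41", "hidden_state": [-1.0, 0.8, -0.5]},  # scored low, filtered out
    {"answer": "42", "hidden_state": [0.9, 0.2, 0.7]},    # scored high
]
print(best_of_n(candidates)["answer"])           # → 42
print(conditional_majority_vote(candidates))     # → 42
```

The point of the sketch is the cost profile: scoring a candidate is a single cheap pass over features the base LLM has already computed, rather than a second full LLM forward pass per candidate.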
