

Poster

Forking Paths in Neural Text Generation

Eric Bigelow · Ari Holtzman · Hidenori Tanaka · Tomer Ullman

Hall 3 + Hall 2B #207
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Estimating uncertainty in Large Language Models (LLMs) is important for properly evaluating LLMs and for ensuring user safety. However, prior approaches to uncertainty estimation focus on the final answer in generated text, ignoring intermediate steps that might dramatically impact the outcome. We hypothesize that there exist key forking tokens, such that re-sampling the system at those specific tokens, but not others, leads to very different outcomes. To test this empirically, we develop a novel approach to representing uncertainty dynamics across individual tokens of text generation, and apply statistical models to test our hypothesis. Our approach is highly flexible: it can be applied to any dataset and any LLM, without fine-tuning or access to model weights. We use our method to analyze LLM responses on 7 different tasks across 4 domains, spanning a wide range of typical use cases. We find many examples of forking tokens, including surprising ones such as a space character instead of a colon, suggesting that LLMs are often just a single token away from saying something very different.
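The core idea can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal, assumption-laden illustration of re-sampling at every token prefix of a generated response and flagging positions where the empirical distribution over final answers shifts sharply. The callables `sample_fn` and `extract_answer_fn`, the sample budget, and the total-variation threshold are all hypothetical stand-ins.

```python
# Sketch of the forking-token idea: re-sample completions at each prefix of a
# generated response and flag tokens after which the answer distribution
# changes sharply. `sample_fn` and `extract_answer_fn` are hypothetical
# placeholders for an LLM sampling call and an answer parser.

from collections import Counter
from typing import Callable, Dict, List


def answer_distribution(samples: List[str]) -> Dict[str, float]:
    """Empirical distribution over final answers from a batch of samples."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {ans: c / total for ans, c in counts.items()}


def total_variation(p: Dict[str, float], q: Dict[str, float]) -> float:
    """Total variation distance between two answer distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)


def find_forking_tokens(
    tokens: List[str],
    sample_fn: Callable[[str], str],          # prefix text -> one sampled completion
    extract_answer_fn: Callable[[str], str],  # completion -> final answer string
    n_samples: int = 30,
    threshold: float = 0.5,
) -> List[int]:
    """Return indices of candidate forking tokens: tokens whose inclusion in
    the prefix sharply changes the distribution over final answers."""
    prev_dist = None
    forks: List[int] = []
    for t in range(len(tokens) + 1):
        prefix = "".join(tokens[:t])
        answers = [extract_answer_fn(sample_fn(prefix)) for _ in range(n_samples)]
        dist = answer_distribution(answers)
        if prev_dist is not None and total_variation(prev_dist, dist) >= threshold:
            forks.append(t - 1)  # the token just added caused the shift
        prev_dist = dist
    return forks
```

In practice the per-prefix answer distributions would come from many sampled continuations of an actual LLM, and the paper fits statistical models to these uncertainty dynamics rather than applying a fixed distance threshold as done here.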
