Poster
Turning Up the Heat: Min-p Sampling for Creative and Coherent LLM Outputs
Minh Nguyen · Andrew Baker · Clement Neo · Allen Roush · Andreas Kirsch · Ravid Shwartz-Ziv
Hall 3 + Hall 2B #237
Wed 23 Apr 7:30 p.m. PDT — 9 p.m. PDT
Large Language Models (LLMs) generate text by sampling the next token from a probability distribution over the vocabulary at each decoding step. Popular sampling methods such as top-p (nucleus) sampling often struggle to balance quality and diversity, especially at the higher temperatures that tend to produce incoherent or repetitive outputs. We propose min-p sampling, a dynamic truncation method that adjusts the sampling threshold according to the model's confidence, using the top token's probability as a scaling factor. Our experiments on benchmarks including GPQA, GSM8K, and AlpacaEval Creative Writing show that min-p sampling improves both the quality and diversity of generated text across model families (Mistral and Llama 3) and model sizes (1B to 123B parameters), especially at higher temperatures. Human evaluations further show a clear preference for min-p sampling in both text quality and creativity. Min-p sampling has been adopted by popular open-source LLM frameworks, including Hugging Face Transformers, vLLM, and many others, underscoring its impact on improving text generation quality.
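The core idea above can be sketched in a few lines: compute the next-token distribution, set the truncation threshold to a base value scaled by the top token's probability, drop tokens below it, renormalize, and sample. This is a minimal NumPy sketch of that procedure, not the reference implementation; the function name and the `p_base` parameter name are illustrative assumptions.

```python
import numpy as np

def min_p_sample(logits, p_base=0.1, temperature=1.0, rng=None):
    """Sample a token index using min-p truncation (sketch).

    Tokens whose probability falls below p_base * max_prob are
    discarded; the survivors are renormalized and sampled from.
    """
    rng = rng or np.random.default_rng()
    # Apply temperature before computing probabilities (softmax).
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Dynamic threshold: scaled by the model's top-token confidence,
    # so a confident distribution is truncated aggressively while a
    # flat (uncertain) one keeps many candidates.
    threshold = p_base * probs.max()
    probs = np.where(probs >= threshold, probs, 0.0)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```

For example, with a sharply peaked distribution the threshold rises toward the top token's probability and nearly everything else is pruned; at high temperature the distribution flattens, the threshold drops, and more tokens remain eligible, which is the intended quality/diversity trade-off.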