MMSU: A Massive Multi-task Spoken Language Understanding and Reasoning Benchmark
Abstract
Speech inherently contains rich acoustic information that extends far beyond its textual content. In real-world spoken communication, effective interpretation often requires integrating the semantic meaning (e.g., content), paralinguistic features (e.g., emotion, speaking rate, pitch), and phonological characteristics (e.g., prosody, intonation, rhythm) embedded in speech. While recent multimodal Speech Large Language Models (SpeechLLMs) have demonstrated remarkable capabilities in processing audio, their ability to perform fine-grained perception and complex reasoning over natural speech remains largely unexplored. To address this gap, we introduce MMSU, a comprehensive benchmark designed specifically for understanding and reasoning in speech. MMSU comprises 5,000 meticulously curated audio-question-answer triplets spanning 47 distinct tasks. Notably, linguistic theory forms the foundation of spoken language understanding (SLU), yet existing benchmarks pay insufficient attention to this fundamental aspect and fail to capture the broader linguistic picture. To ground our benchmark in linguistic principles, we systematically incorporate a wide range of linguistic phenomena, including phonetics, prosody, rhetoric, syntax, semantics, and paralinguistics. Through a rigorous evaluation of 22 advanced SpeechLLMs, we identify substantial room for improvement in existing models. MMSU establishes a new standard for the comprehensive assessment of SLU, providing valuable insights for developing more sophisticated human-AI speech interaction systems.