

Poster

OmnixR: Evaluating Omni-modality Language Models on Reasoning across Modalities

Lichang Chen · Hexiang Hu · Mingda Zhang · Yiwen Chen · Zifeng Wang · YANDONG LI · Pranav Shyam · Tianyi Zhou · Heng Huang · Ming-Hsuan Yang · Boqing Gong

Hall 3 + Hall 2B #52
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

We introduce OmnixR, an evaluation suite designed to benchmark state-of-the-art Omni-modality Language Models (OLMs), such as GPT-4o and Gemini. Evaluating OLMs, which integrate multiple modalities such as text, vision, and audio, presents unique challenges: a user message often spans several modalities, so an OLM must build a holistic understanding of, and reason across, all of them to accomplish the task. Existing benchmarks are limited to single-modality or dual-modality tasks (e.g., image+text or video+text), overlooking comprehensive multi-modal assessments of model reasoning. To address this, OmnixR offers two evaluation variants: (1) OmnixR-synth, a synthetic dataset generated automatically by translating text into multiple modalities (audio, images, video, and hybrids) with Omnify!; and (2) OmnixR-real, a real-world dataset, manually curated and annotated by experts, for evaluating cross-modal reasoning in natural settings. OmnixR evaluates OLMs over a diverse mix of modalities, such as a question that involves video, audio, and text, providing a more rigorous cross-modal reasoning testbed than any existing benchmark. Our experiments find that all state-of-the-art OLMs struggle with OmnixR questions that require integrating information from multiple modalities to answer. Further analysis highlights differences in reasoning behavior and underscores the challenges of omni-modal AI alignment.
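As a rough illustration of the Omnify! idea described above (translating a text question into other modalities), the minimal sketch below renders a question as an image and synthesizes it as speech. The library choices (Pillow, gTTS), function names, and output paths are assumptions for illustration; this is not the authors' pipeline.

# Hypothetical sketch of an Omnify!-style text-to-modality conversion.
# Library choices (Pillow, gTTS) and output paths are illustrative assumptions.
from PIL import Image, ImageDraw
from gtts import gTTS

def text_to_image(text: str, path: str = "question.png") -> str:
    """Render the question text onto a blank canvas (image modality)."""
    img = Image.new("RGB", (800, 200), color="white")
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), text, fill="black")  # default bitmap font
    img.save(path)
    return path

def text_to_audio(text: str, path: str = "question.mp3") -> str:
    """Synthesize the question as speech (audio modality)."""
    gTTS(text).save(path)
    return path

if __name__ == "__main__":
    question = "If a train travels 60 km in 45 minutes, what is its average speed in km/h?"
    print(text_to_image(question), text_to_audio(question))

A hybrid sample would combine such renderings (e.g., the image plus the audio) so that answering requires integrating information across modalities.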
