Grounding or Guessing? Visual Signals for Detecting Hallucinations in Sign Language Translation
Abstract
Hallucination, where models generate fluent text unsupported by visual evidence, remains a major flaw in vision–language models and is especially critical in sign language translation (SLT). In SLT, meaning depends on precise grounding in video, and gloss-free models are particularly vulnerable because they map continuous signer movements directly into natural language without intermediate gloss supervision. We argue that hallucination arises when models rely on language priors rather than visual input. To capture this, we propose a token-level measure of reliability that quantifies how much the decoder uses visual information. Our method combines feature-based sensitivity, which measures internal changes when the video is masked, with counterfactual signals, which capture probability differences between clean and altered video inputs. These token-level signals are aggregated into a sentence-level reliability score, providing a compact and interpretable measure of visual grounding. We evaluate the proposed measure on two SLT benchmarks (PHOENIX-2014T and CSL-Daily) with both gloss-based and gloss-free models. Our results show that reliability predicts hallucination rates, generalizes across datasets and architectures, and decreases under visual degradations. Beyond these quantitative trends, we find that reliability distinguishes grounded tokens from guessed ones, enabling risk estimation without references; combining it with text-based signals (confidence, perplexity, or entropy) improves these estimates further. Qualitative analysis additionally highlights why gloss-free models are more susceptible to hallucination. Taken together, our findings establish reliability as a practical and reusable tool for diagnosing hallucinations in SLT and lay the groundwork for more robust hallucination detection in multimodal generation.
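To make the counterfactual component concrete, the following is a minimal sketch of how per-token signals could be aggregated into a sentence-level reliability score. The function names, the use of raw log-probability differences, and the mean aggregation are illustrative assumptions, not the paper's exact formulation; the per-token log-probabilities are assumed to come from scoring the same translation with clean versus masked video input.

```python
# Hypothetical sketch of a sentence-level reliability score (assumptions noted above).
from typing import Sequence


def token_reliability(logprob_clean: float, logprob_masked: float) -> float:
    """Counterfactual signal for one token: how much its log-probability drops
    when the video input is masked. Values near 0 suggest the token is driven
    by language priors (guessed); larger values suggest visual grounding."""
    return max(0.0, logprob_clean - logprob_masked)


def sentence_reliability(clean: Sequence[float], masked: Sequence[float]) -> float:
    """Aggregate token-level signals into one sentence-level score.
    A simple mean is used here for illustration."""
    signals = [token_reliability(c, m) for c, m in zip(clean, masked)]
    return sum(signals) / max(len(signals), 1)


# Toy usage: per-token log-probs of the same hypothesis under clean vs. masked video.
clean_lp = [-0.4, -0.6, -1.2, -0.3]
masked_lp = [-2.1, -0.7, -3.0, -0.35]
print(sentence_reliability(clean_lp, masked_lp))
```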