CogMoE: Signal-Quality–Guided Multimodal MoE for Cognitive Load Prediction
Abstract
The poor and variable quality of physiological signals fundamentally constrains reliable cognitive load (CL) prediction in real-world settings. In safety-critical tasks such as driving, degraded signal quality can severely compromise prediction accuracy, limiting the deployment of existing models outside controlled lab conditions. To address this challenge, we propose CogMoE, a signal-quality–guided Mixture-of-Experts (MoE) framework that dynamically adapts to heterogeneous and noisy inputs. CogMoE replaces conventional modality-based fusion with a quality-aware gating mechanism that integrates EEG, ECG, EDA, and gaze according to their estimated signal quality, shifting the basis of multimodal modeling from modality identity to signal quality. The framework operates in two stages: (1) quality-aware multimodal synchronization and recovery to mitigate artifacts, temporal misalignment, and missing data, and (2) signal-quality-specific expert modeling via a cross-modal MoE transformer that regulates information flow based on signal quality. To further improve stability, we introduce the CORTEX Loss, which balances task accuracy, quality-aware representation refinement, and expert utilization under noise. Experiments on CL-Drive and ADABase demonstrate that CogMoE outperforms strong baselines across all modality combinations and sequence lengths, consistently delivering improvements across diverse signal-quality conditions. Our code is publicly available at https://github.com/shahaamirbader/CogMoE.
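To make the quality-aware gating idea concrete, the sketch below fuses per-modality features using weights derived from estimated signal-quality scores rather than fixed modality identities. This is a minimal illustration only, not the paper's actual architecture: the function name `quality_gated_fusion`, the softmax gating form, and the scalar quality estimates are all assumptions for exposition.

```python
import numpy as np

def quality_gated_fusion(features, quality, temperature=1.0):
    """Fuse per-modality feature vectors with quality-derived gate weights.

    features: dict mapping modality name -> feature vector (same dimension)
    quality:  dict mapping modality name -> scalar quality estimate in [0, 1]
    Returns the fused feature vector and the per-modality gate weights.
    """
    names = sorted(features)
    q = np.array([quality[n] for n in names], dtype=float)
    # Softmax over quality scores: cleaner signals receive larger gate weights,
    # so a noisy channel (e.g. motion-corrupted EEG) contributes less.
    w = np.exp(q / temperature)
    w = w / w.sum()
    fused = sum(wi * features[n] for wi, n in zip(w, names))
    return fused, dict(zip(names, w))

# Hypothetical two-modality example: high-quality EEG, degraded ECG.
feats = {
    "eeg": np.array([1.0, 0.0]),
    "ecg": np.array([0.0, 1.0]),
}
fused, gates = quality_gated_fusion(feats, {"eeg": 0.9, "ecg": 0.1})
```

In this toy example the gate for EEG exceeds the gate for ECG, so the fused representation leans toward the cleaner channel; a learned gating network in the full model would play the role of the fixed softmax here.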