Continuous multinomial logistic regression for neural decoding
Abstract
Multinomial logistic regression (MLR) is a classic model for multi-class classification that has been widely used for neural decoding. However, MLR requires a finite set of discrete output classes, limiting its applicability to settings with continuous-valued outputs (e.g., time, orientation, velocity, or spatial position), which are common in neuroscience. To address this limitation, we propose Continuous Multinomial Logistic Regression (CMLR), a generalization of logistic regression to continuous output spaces. CMLR represents a novel exponential-family model for conditional density estimation (CDE), mapping neural population activity to a full probability density over external covariates. It captures the influence of each neuron’s activity on the decoded variable through a smooth, interpretable tuning function, regularized by a Gaussian process prior. The resulting nonparametric decoding model flexibly captures asymmetric and multimodal densities, and accommodates both linear and circular variables. To illustrate the performance of CMLR, we applied it to large-scale datasets from mouse and monkey visual cortex, mouse hippocampus, and monkey motor cortex, where it generally outperformed a wide variety of other decoding methods, including deep neural networks (DNNs), XGBoost, and FlexCode. It also outperformed a closely related correlation-blind decoder, highlighting the importance of correlations for accurate neural decoding. The CMLR model provides a scalable, flexible, and interpretable method for decoding continuous variables from diverse brain regions.
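To make the core idea concrete, the sketch below illustrates the "continuous softmax" at the heart of a model like CMLR: the log-density over the covariate is a sum of per-neuron tuning functions weighted by spike counts, normalized over a grid. This is a hypothetical, simplified implementation for illustration only; the function names, the Gaussian-bump tuning curves (standing in for tuning functions learned under a Gaussian process prior), and the grid-based normalization are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid over the continuous covariate y (e.g., position on a 1-D track).
y_grid = np.linspace(0.0, 1.0, 200)

n_neurons = 5
# Smooth per-neuron tuning functions f_i(y); random Gaussian bumps stand in
# for functions that CMLR would learn under a GP prior (an assumption here).
centers = rng.uniform(0.0, 1.0, n_neurons)
tuning = np.exp(-0.5 * ((y_grid[None, :] - centers[:, None]) / 0.1) ** 2)

def cmlr_density(spike_counts, tuning, y_grid):
    """Continuous softmax: p(y | x) proportional to exp(sum_i x_i f_i(y))."""
    log_unnorm = spike_counts @ tuning      # (n_neurons,) @ (n_neurons, G) -> (G,)
    log_unnorm -= log_unnorm.max()          # subtract max for numerical stability
    p = np.exp(log_unnorm)
    p /= np.trapz(p, y_grid)                # normalize so the density integrates to 1
    return p

x = rng.poisson(3.0, n_neurons).astype(float)   # one population activity vector
p = cmlr_density(x, tuning, y_grid)
y_hat = y_grid[np.argmax(p)]                    # MAP point estimate of the covariate
```

Because the output is a full density rather than a point estimate, asymmetric or multimodal posteriors (as described in the abstract) fall out naturally; a point decode such as the MAP above is just one summary of it.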