

Poster

Beyond Interpretability: The Gains of Feature Monosemanticity on Model Robustness

Qi Zhang · Yifei Wang · Jingyi Cui · Xiang Pan · Qi Lei · Stefanie Jegelka · Yisen Wang

Hall 3 + Hall 2B #331
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Deep learning models often suffer from a lack of interpretability due to polysemanticity, where individual neurons are activated by multiple unrelated semantics, resulting in unclear attributions of model behavior. Recent advances in monosemanticity, where neurons correspond to consistent and distinct semantics, have significantly improved interpretability but are commonly believed to compromise accuracy. In this work, we challenge the prevailing belief in an accuracy-interpretability tradeoff, showing that monosemantic features not only enhance interpretability but also bring concrete performance gains on robustness-related tasks. Across multiple robust learning scenarios, including input and label noise, few-shot learning, and out-of-domain generalization, our results show that models leveraging monosemantic features significantly outperform those relying on polysemantic features. Furthermore, we provide empirical and theoretical understanding of the robustness gains of feature monosemanticity. Our preliminary analysis suggests that monosemanticity, by promoting better separation of feature representations, leads to more robust decision boundaries under noise. This diverse evidence highlights the generality of monosemanticity in improving model robustness. As a first step in this new direction, we explore the learning benefits of monosemanticity beyond interpretability, supporting the long-standing hypothesis linking interpretability and robustness. Code is available at https://github.com/PKU-ML/Monosemanticity-Robustness.
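To make the mono/polysemanticity distinction concrete, the toy sketch below contrasts a synthetic representation in which each class drives one dedicated neuron with an entangled version of the same representation produced by a random rotation, and counts how many classes strongly activate each neuron. This is only an illustrative sketch of the concept; the data, constants, and the counting proxy are assumptions for illustration and are not the paper's code or its monosemanticity metric.

```python
# Toy illustration (assumed setup, not the paper's implementation):
# monosemantic neurons respond to one class; polysemantic neurons mix several.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_classes = 200, 4
labels = np.repeat(np.arange(n_classes), n_per_class)

# Monosemantic code: each class mainly drives one dedicated neuron.
mono = rng.normal(0.0, 0.3, size=(n_per_class * n_classes, n_classes))
mono[np.arange(len(labels)), labels] += 3.0

# Polysemantic code: same information, but a random rotation entangles
# every neuron with several class directions.
rotation, _ = np.linalg.qr(rng.normal(size=(n_classes, n_classes)))
poly = mono @ rotation

def semantics_per_neuron(feats, labels, frac=0.5):
    """For each neuron, count how many classes drive it strongly
    (class-mean magnitude above `frac` of that neuron's largest one)."""
    class_means = np.stack([feats[labels == c].mean(0) for c in np.unique(labels)])
    strength = np.abs(class_means)                  # shape: (n_classes, n_neurons)
    return (strength >= frac * strength.max(0)).sum(0)

print("monosemantic:", semantics_per_neuron(mono, labels))  # about 1 class per neuron
print("polysemantic:", semantics_per_neuron(poly, labels))  # typically several classes per neuron
```

Under this assumed setup, the counting proxy stands in for the general idea of measuring neuron-level class selectivity; the paper's actual experiments and analysis of robustness gains are in the linked repository.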
