Poster in Workshop: Second Workshop on Representational Alignment (Re$^2$-Align)
Understanding task representations in neural networks via Bayesian ablation
Andrew Nam · Declan Campbell · Thomas L. Griffiths · Jonathan Cohen · Sarah-Jane Leslie
Abstract:
Neural networks are powerful tools for cognitive modeling due to their flexibility and emergent properties. However, their sub-symbolic semantics make their learned representations difficult to interpret. In this work, we introduce a novel probabilistic framework for interpreting latent task representations in neural networks. Inspired by Bayesian inference, our approach defines a distribution over representational units to infer their causal contributions to task performance. Using ideas from information theory, we propose a suite of tools and metrics to illuminate key model properties, including representational distributedness, manifold complexity, and polysemanticity.
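To give a concrete feel for the general idea described in the abstract, the following is a minimal sketch, not the authors' implementation: it places an independent Bernoulli "keep" distribution over a toy network's hidden units, samples ablation masks, and estimates each unit's causal contribution to task performance as the expected performance gap between samples that keep versus ablate that unit. The toy model, task, keep probability, and all variable names are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of ablation under a distribution over units.
# Everything below (toy model, task, keep_prob) is an assumption for
# demonstration only, not the method or code from the paper.

import numpy as np

rng = np.random.default_rng(0)

n_units = 16       # representational units to ablate
n_samples = 2000   # Monte Carlo samples of ablation masks
keep_prob = 0.5    # Bernoulli parameter of the mask distribution

# Toy "trained" model: fixed random hidden features and a linear readout.
W_in = rng.normal(size=(8, n_units))
w_out = rng.normal(size=n_units)

X = rng.normal(size=(256, 8))                  # toy inputs
y = (X @ W_in @ w_out > 0).astype(float)       # toy binary task labels

def task_performance(mask):
    """Accuracy of the readout when hidden units are zeroed out by `mask`."""
    h = np.tanh(X @ W_in) * mask
    pred = (h @ w_out > 0).astype(float)
    return (pred == y).mean()

# Sample ablation masks from the Bernoulli distribution and score each one.
masks = rng.random((n_samples, n_units)) < keep_prob
scores = np.array([task_performance(m) for m in masks])

# Estimated contribution of unit i:
# E[performance | unit i kept] - E[performance | unit i ablated].
contrib = np.array([
    scores[masks[:, i]].mean() - scores[~masks[:, i]].mean()
    for i in range(n_units)
])

for i, c in enumerate(contrib):
    print(f"unit {i:2d}: estimated contribution {c:+.3f}")
```

Because performance is averaged over many sampled masks rather than computed from single-unit lesions, each unit's estimate reflects its contribution in the context of other units being present or absent, which is one motivation for treating ablation probabilistically.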