

Poster

Sparse Feature Circuits: Discovering and Editing Interpretable Causal Graphs in Language Models

Samuel Marks · Can Rager · Eric Michaud · Yonatan Belinkov · David Bau · Aaron Mueller

Hall 3 + Hall 2B #495
Sat 26 Apr midnight PDT — 2:30 a.m. PDT
 
Oral presentation: Oral Session 5C
Fri 25 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

We introduce methods for discovering and applying sparse feature circuits. These are causally implicated subnetworks of human-interpretable features for explaining language model behaviors. Circuits identified in prior work consist of polysemantic and difficult-to-interpret units like attention heads or neurons, rendering them unsuitable for many downstream applications. In contrast, sparse feature circuits enable detailed understanding of unanticipated mechanisms in neural networks. Because they are based on fine-grained units, sparse feature circuits are useful for downstream tasks: We introduce SHIFT, where we improve the generalization of a classifier by ablating features that a human judges to be task-irrelevant. Finally, we demonstrate an entirely unsupervised and scalable interpretability pipeline by discovering thousands of sparse feature circuits for automatically discovered model behaviors.
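To make the SHIFT step concrete, here is a minimal, hypothetical sketch of the underlying idea: given feature activations from a sparse autoencoder, a classifier (probe) is trained on them, a set of features a human judges task-irrelevant is zero-ablated, and the probe is re-evaluated. This is not the authors' code; the data, feature indices, and probe are stand-ins chosen only for illustration.

```python
# Illustrative sketch of the SHIFT idea (hypothetical data and feature choices,
# not the authors' implementation): zero-ablate human-flagged task-irrelevant
# SAE features and re-evaluate a classifier trained on the original activations.
import torch

torch.manual_seed(0)

n_examples, n_features = 256, 512
# Stand-in for sparse-autoencoder feature activations on some dataset.
feature_acts = torch.relu(torch.randn(n_examples, n_features))
labels = torch.randint(0, 2, (n_examples,))

# A simple linear probe over the feature activations.
probe = torch.nn.Linear(n_features, 2)
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
loss_fn = torch.nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(probe(feature_acts), labels).backward()
    opt.step()

# Features a human judged task-irrelevant (hypothetical indices); in SHIFT
# these would come from inspecting the discovered sparse feature circuit.
irrelevant = torch.tensor([3, 17, 42, 99])

def ablate(acts: torch.Tensor, idx: torch.Tensor) -> torch.Tensor:
    """Zero-ablate the selected feature dimensions."""
    out = acts.clone()
    out[:, idx] = 0.0
    return out

with torch.no_grad():
    acc_before = (probe(feature_acts).argmax(-1) == labels).float().mean()
    acc_after = (probe(ablate(feature_acts, irrelevant)).argmax(-1) == labels).float().mean()
print(f"accuracy before ablation: {acc_before:.3f}, after: {acc_after:.3f}")
```

In the paper's setting, the payoff is measured on a distribution shift: ablating spuriously predictive but task-irrelevant features should hurt in-distribution accuracy little while improving generalization; the toy comparison above only shows where that evaluation would plug in.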
