

Poster

Words in Motion: Extracting Interpretable Control Vectors for Motion Transformers

Omer Sahin Tas · Royden Wagner

Hall 3 + Hall 2B #303
Fri 25 Apr, midnight – 2:30 a.m. PDT

Abstract:

Transformer-based models generate hidden states that are difficult to interpret. In this work, we analyze hidden states and modify them at inference, with a focus on motion forecasting. We use linear probing to analyze whether interpretable features are embedded in hidden states. Our experiments reveal high probing accuracy, indicating latent space regularities with functionally important directions. Building on this, we use the directions between hidden states with opposing features to fit control vectors. At inference, we add our control vectors to hidden states and evaluate their impact on predictions. Remarkably, such modifications preserve the feasibility of predictions. We further refine our control vectors using sparse autoencoders (SAEs). This leads to more linear changes in predictions when scaling control vectors. Our approach enables mechanistic interpretation as well as zero-shot generalization to unseen dataset characteristics with negligible computational overhead.
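
As an illustration of the approach summarized above, the following sketch shows how difference-based control vectors could be fit and applied. This is a minimal sketch under assumed shapes and names (hidden_lo, hidden_hi, apply_control, and the random stand-in tensors are all hypothetical), not the authors' implementation; the SAE refinement step is omitted.

```python
# Minimal sketch, assuming hidden states have already been extracted from a
# motion transformer. Function names, shapes, and the random stand-in data
# are illustrative assumptions, not the authors' code.
import torch
from sklearn.linear_model import LogisticRegression

# Stand-in hidden states (num_samples, hidden_dim) for two opposing feature
# values, e.g. low-speed vs. high-speed motion (hypothetical data).
hidden_lo = torch.randn(100, 256)
hidden_hi = torch.randn(100, 256) + 0.5  # offset so the probe has signal

# 1) Linear probing: check whether the feature is linearly decodable.
H = torch.cat([hidden_lo, hidden_hi]).numpy()
y = [0] * 100 + [1] * 100
probe = LogisticRegression(max_iter=1000).fit(H, y)
print(f"probe accuracy: {probe.score(H, y):.2f}")  # high accuracy suggests a linear direction

# 2) Fit a control vector as the direction between the mean hidden states
#    of samples with opposing feature values.
direction = hidden_hi.mean(dim=0) - hidden_lo.mean(dim=0)
control_vector = direction / direction.norm()  # unit-norm direction

# 3) At inference, shift hidden states along the control direction; the
#    scale determines how strongly predictions move toward the feature.
def apply_control(hidden: torch.Tensor, v: torch.Tensor, scale: float) -> torch.Tensor:
    return hidden + scale * v

steered = apply_control(torch.randn(8, 256), control_vector, scale=4.0)
print(steered.shape)  # torch.Size([8, 256])
```

In this setup the scale argument governs how far hidden states move along the feature direction; per the abstract, refining such vectors with sparse autoencoders makes the resulting change in predictions more linear in this scale.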
