Poster

Intricacies of Feature Geometry in Large Language Models

Satvik Golechha · Lucius Bushnaq · Euan Ong · Neeraj Kayal · Nandi Schoots

Hall 3 + Hall 2B #208
[ Project Page ]
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Studying the geometry of a language model's embedding space is an important and challenging task because of the various ways concepts can be represented, extracted, and used. Specifically, we want a framework that unifies both measurement (how well a latent explains a feature/concept) and causal intervention (how well a latent can be used to control/steer the model). We discuss several challenges with recent approaches to studying the geometry of categorical and hierarchical concepts in large language models (LLMs), and we justify, both theoretically and empirically, our main takeaway: their orthogonality and polytope results are trivially true in high-dimensional spaces and can be observed even in settings where they should not occur.
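The "trivially true in high-dimensional spaces" point can be illustrated with a minimal NumPy sketch (an illustration of the underlying concentration phenomenon, not the paper's actual experiments): independent random unit vectors become nearly orthogonal as dimension grows, with typical cosine similarity shrinking roughly like 1/sqrt(d), so near-orthogonality of feature directions is the high-dimensional default rather than evidence of structure.

```python
import numpy as np

# Illustrative sketch (not from the paper): pairs of independent random
# unit vectors in dimension d have cosine similarity concentrating near 0,
# so "the directions are nearly orthogonal" holds almost for free.
rng = np.random.default_rng(0)

for dim in (10, 100, 1_000, 10_000):
    # Draw 1000 pairs of random vectors and normalize them to unit length.
    u = rng.standard_normal((1000, dim))
    v = rng.standard_normal((1000, dim))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # Cosine similarity of each pair; its typical magnitude ~ 1/sqrt(dim).
    cos = np.sum(u * v, axis=1)
    print(f"dim={dim:>6}  mean |cos| = {np.abs(cos).mean():.4f}")
```

Running this shows the mean absolute cosine similarity dropping by roughly a factor of sqrt(10) per row, which is the baseline against which any claimed orthogonality structure would need to be compared.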
