Probability Distributions Computed by Autoregressive Transformers
Andy Yang ⋅ Anej Svete ⋅ Jiaoda Li ⋅ Anthony W. Lin ⋅ Jonathan Rawski ⋅ Ryan Cotterell ⋅ David Chiang
Abstract
Most expressivity results for transformers treat them as language recognizers—devices that accept or reject strings—rather than as they are used in practice: as language models that generate strings autoregressively and probabilistically. We characterize the probability distributions that transformer language models can express. We show that making transformer language recognizers autoregressive can sometimes increase their expressivity, and that making them probabilistic can break equivalences that hold in the non-probabilistic case. Our overall contribution is to tease apart what functions transformers are capable of expressing in their most common use case as language models.