Efficient Autoregressive Inference for Transformer Probabilistic Models
Conor Hassan · Nasrulloh Satrio · Cen-You Li · Daolang Huang · Paul Chang · Yang Yang · Francesco Silvestrin · Samuel Kaski · Luigi Acerbi
Abstract
Transformer-based models for amortized probabilistic inference, such as neural processes, prior-data fitted networks, and tabular foundation models, excel at single-pass *marginal* prediction. However, many real-world applications require coherent *joint distributions* that capture dependencies between predictions. While purely autoregressive architectures efficiently generate such distributions, they sacrifice the flexible set-conditioning that makes these models powerful for meta-learning. Conversely, the standard approach to obtaining joint distributions from set-based models requires expensive re-encoding of an updated context set at each autoregressive step. We introduce a *causal autoregressive buffer* that preserves the advantages of both paradigms. Our approach decouples context encoding from updating the conditioning set. The model processes the context once and caches it, while a dynamic buffer captures target dependencies: as targets are incorporated, they enter the buffer and attend to both the cached context and previously buffered targets. This enables efficient batched autoregressive generation and one-pass joint predictive density evaluation. Training seamlessly integrates set-based and autoregressive modes at minimal additional cost. Across synthetic functions, EEG signals, cognitive models, and tabular data, our method matches the predictive accuracy of strong baselines while delivering up to $20\times$ faster joint sampling.
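To make the buffering mechanism concrete, the sketch below illustrates how a causal autoregressive buffer could be realized in PyTorch: the context set is encoded and cached once, and each new target attends to the cached context plus previously buffered targets, so joint samples are drawn without re-encoding the context. This is a minimal illustration under our own assumptions, not the paper's implementation; all module names, dimensions, and the Gaussian predictive head are illustrative.

```python
# Minimal sketch of a causal autoregressive buffer (illustrative, not the authors' code).
import torch
import torch.nn as nn

class BufferedARDecoder(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.embed_xy = nn.Linear(2, dim)   # embed observed (x, y) pairs
        self.embed_x = nn.Linear(1, dim)    # embed query locations x
        self.context_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), num_layers=2)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, 2)       # predictive mean and log-scale (assumed Gaussian head)

    def encode_context(self, ctx_xy):
        # Encode the context set once; the result is cached and never re-encoded.
        return self.context_encoder(self.embed_xy(ctx_xy))

    def step(self, cached_ctx, buffer, x_next):
        # The new target query attends to the cached context and previously buffered targets.
        q = self.embed_x(x_next)                                            # (B, 1, dim)
        kv = cached_ctx if buffer is None else torch.cat([cached_ctx, buffer], dim=1)
        h, _ = self.attn(q, kv, kv)
        mean, log_scale = self.head(h).chunk(2, dim=-1)
        return mean, log_scale

def joint_sample(model, ctx_xy, target_x):
    """Autoregressive joint sampling with a single context-encoding pass."""
    cached_ctx = model.encode_context(ctx_xy)                               # encode once, cache
    buffer, ys = None, []
    for t in range(target_x.shape[1]):
        x_next = target_x[:, t:t + 1, :]
        mean, log_scale = model.step(cached_ctx, buffer, x_next)
        y = mean + log_scale.exp() * torch.randn_like(mean)                 # sample y_t
        ys.append(y)
        # Incorporate the sampled target into the buffer; causality is enforced by the loop.
        new_tok = model.embed_xy(torch.cat([x_next, y], dim=-1))
        buffer = new_tok if buffer is None else torch.cat([buffer, new_tok], dim=1)
    return torch.cat(ys, dim=1)

# Example usage: batch of 3 functions, 8 context points, 5 jointly sampled targets.
model = BufferedARDecoder()
ctx = torch.randn(3, 8, 2)
tgt_x = torch.randn(3, 5, 1)
samples = joint_sample(model, ctx, tgt_x)                                   # shape (3, 5, 1)
```

For one-pass joint density evaluation, the same idea would be applied with all targets placed in the buffer at once and a causal attention mask over the buffer positions; the loop above simply makes the ordering explicit.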