Poster

Bayesian Context Aggregation for Neural Processes

Michael Volpp · Fabian Flürenbrock · Lukas Grossberger · Christian Daniel · Gerhard Neumann

Virtual

Keywords: [ Deep Sets ] [ Meta-Learning ] [ Latent Variable Models ] [ Aggregation Methods ] [ Multi-Task Learning ] [ Neural Processes ]


Abstract:

Formulating scalable probabilistic regression models with reliable uncertainty estimates has been a long-standing challenge in machine learning research. Recently, casting probabilistic regression as a multi-task learning problem in terms of conditional latent variable (CLV) models such as the Neural Process (NP) has shown promising results. In this paper, we focus on context aggregation, a central component of such architectures, which fuses information from multiple context data points. So far, this aggregation operation has been treated separately from the inference of a latent representation of the target function in CLV models. Our key contribution is to combine these steps into one holistic mechanism by phrasing context aggregation as a Bayesian inference problem. The resulting Bayesian Aggregation (BA) mechanism enables principled handling of task ambiguity, which is key for efficiently processing context information. We demonstrate on a range of challenging experiments that BA consistently improves upon the performance of traditional mean aggregation while remaining computationally efficient and fully compatible with existing NP-based models.
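To make the contrast between the two aggregation schemes concrete, here is a minimal NumPy sketch. The abstract does not spell out the update equations, so the code below assumes a standard Gaussian-conditioning formulation: each encoded context point is treated as a noisy, factorized Gaussian observation of the latent task variable z, and precisions are accumulated against a Gaussian prior. The names (`bayesian_aggregation`, `mean_aggregation`, `r`, `sigma_sq`) and the exact parameterization are hypothetical, not taken from the paper.

```python
import numpy as np

def mean_aggregation(r):
    """Traditional mean aggregation: average per-point encodings.

    r : (n, d) latent observations, one row per context point.
    Returns a single (d,) representation; no uncertainty is tracked.
    """
    return r.mean(axis=0)

def bayesian_aggregation(r, sigma_sq, mu_0=0.0, sigma_0_sq=1.0):
    """Sketch of Bayesian Aggregation via Gaussian conditioning (assumed form).

    Each context point i contributes an observation r_i of the latent
    task variable z with per-dimension variance sigma_sq_i. Under a
    factorized Gaussian prior N(mu_0, sigma_0_sq), the posterior is
    obtained in closed form: precisions of independent observations add.

    r        : (n, d) per-point latent observations from an encoder
    sigma_sq : (n, d) per-point observation variances (strictly positive)
    Returns the posterior mean and variance of z, each of shape (d,).
    """
    prior_precision = 1.0 / sigma_0_sq
    # Posterior precision = prior precision + sum of observation precisions.
    post_var = 1.0 / (prior_precision + np.sum(1.0 / sigma_sq, axis=0))
    # Posterior mean shifts the prior toward precision-weighted evidence.
    post_mean = mu_0 + post_var * np.sum((r - mu_0) / sigma_sq, axis=0)
    return post_mean, post_var

# Toy usage with random encoder outputs.
rng = np.random.default_rng(0)
n, d = 5, 16
r = rng.normal(size=(n, d))
sigma_sq = np.exp(rng.normal(size=(n, d)))  # positivity via exp
mu_z, var_z = bayesian_aggregation(r, sigma_sq)
```

One design point this sketch illustrates: unlike mean aggregation, the Bayesian update weights each context point by its (learned) precision, so ambiguous or uninformative points contribute less, and the posterior variance shrinks monotonically as more context is observed.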
