Invited Talk
in
Workshop: Frontiers in Probabilistic Inference: learning meets Sampling
Sampling multimodal distributions by denoising
Marylou Gabrié
Generative models parameterize flexible families of distributions capable of fitting complex datasets such as images or text. Once trained, these models can generate independent samples from intricate high-dimensional distributions at negligible cost. In contrast, sampling exactly from a given target distribution—such as the Boltzmann distribution of a physical system—is often a major challenge due to high dimensionality, multimodality, ill-conditioning, or a combination of these factors. This raises the question: how can generative models be leveraged to assist in the sampling task? A key difficulty in this setting is the lack of an extensive dataset to learn from upfront. In this talk, I will focus in particular on sampling from multimodal distributions and present recent attempts, inspired by diffusion models, to sample using a denoising process. The talk is mainly based on joint works with Louis Grenioux, Maxence Noble and Alain Oliviero Durmus.
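To illustrate the idea of sampling a multimodal target through a denoising process, here is a minimal sketch (not the speaker's method) using annealed Langevin dynamics on a toy 1D bimodal Gaussian mixture. The target is convolved with Gaussian noise of decreasing scale; at high noise the modes merge and Langevin dynamics mixes easily, and the samples are then "denoised" by annealing the noise level down. For this analytic toy target the score of the noised distribution is available in closed form; in realistic settings it is precisely what a learned denoiser would approximate. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bimodal target: equal-weight mixture of N(-3, 0.5^2) and N(+3, 0.5^2).
MEANS = np.array([-3.0, 3.0])
STD = 0.5

def score_noised(x, sigma):
    """Score of the target convolved with N(0, sigma^2).

    Analytic here because the target is a Gaussian mixture; in practice
    this is the quantity a denoising network would be trained to estimate.
    """
    var = STD**2 + sigma**2
    # Posterior responsibility of each mode under the noised mixture.
    log_w = -0.5 * (x[:, None] - MEANS) ** 2 / var
    log_w -= log_w.max(axis=1, keepdims=True)
    w = np.exp(log_w)
    w /= w.sum(axis=1, keepdims=True)
    # Score of a Gaussian mixture = responsibility-weighted per-mode scores.
    return (w * (MEANS - x[:, None]) / var).sum(axis=1)

def annealed_langevin(n=2000, sigmas=np.geomspace(5.0, 0.05, 30), steps=20):
    # Start from a broad Gaussian matching the largest noise level.
    x = rng.normal(0.0, sigmas[0], size=n)
    for sigma in sigmas:           # anneal the noise level down
        eps = 0.05 * sigma**2      # step size shrinks with the noise scale
        for _ in range(steps):     # a few Langevin steps per level
            x = (x + eps * score_noised(x, sigma)
                 + np.sqrt(2 * eps) * rng.normal(size=n))
    return x

samples = annealed_langevin()
frac_right = (samples > 0).mean()  # both modes should be populated evenly
```

A plain Langevin sampler started in one mode of this target would rarely cross to the other; the annealing over noise levels is what restores mixing between modes, which is the multimodality issue the denoising viewpoint addresses.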