2025 Invited Talk
in
Workshop: Deep Generative Model in Machine Learning: Theory, Principle and Efficacy
Sinho Chewi
Abstract:
Prior work has shown that successful estimation of the score functions along a diffusion model leads to efficient sampling under minimal assumptions on the data distribution. Viewed as a distribution learning result, score estimation implies that one can learn a sampler. In this work, we show that score estimation also yields two other forms of distribution learning, namely parameter recovery and density estimation. Score estimation therefore has powerful algorithmic consequences, which in turn make it susceptible to computational intractability results stemming from cryptographic assumptions. This is joint work with Alkis Kalavasis, Anay Mehrotra, and Omar Montasser.
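As a toy illustration of the principle that access to the score function suffices for sampling (this is not the talk's algorithm, and the Gaussian target, step size, and particle counts below are illustrative choices), one can run unadjusted Langevin dynamics with a closed-form score:

```python
import math
import random
import statistics

def score(x, mu=1.0, sigma=2.0):
    # Closed-form score (gradient of log-density) of N(mu, sigma^2).
    # In a diffusion model this would be a learned score estimate;
    # the Gaussian is used here only so the score is known exactly.
    return -(x - mu) / sigma**2

def langevin_sample(n_particles=2000, n_steps=1000, step=0.05, seed=0):
    # Unadjusted Langevin dynamics: x <- x + step * score(x) + sqrt(2*step) * noise.
    # Knowing (an estimate of) the score is all that is needed to run this chain.
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    for _ in range(n_steps):
        xs = [x + step * score(x) + math.sqrt(2 * step) * rng.gauss(0.0, 1.0)
              for x in xs]
    return xs

samples = langevin_sample()
# The empirical mean and standard deviation should be close to (1.0, 2.0),
# up to the discretization bias of the step size and Monte Carlo error.
print(statistics.mean(samples), statistics.stdev(samples))
```

The small step-size bias here is exactly the kind of discretization error that the sampling guarantees mentioned in the abstract must control.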