

Poster in Workshop: Deep Generative Models for Highly Structured Data

Video Diffusion Models

Jonathan Ho · Tim Salimans · Alexey Gritsenko · William Chan · Mohammad Norouzi · David Fleet


Abstract:

We present results on video generation using diffusion models. We propose an architecture for video diffusion models that is a natural extension of the standard image architecture, and we show that it is effective to jointly train on image and video data. We show how to generate long videos using a new conditioning technique that outperforms previously proposed methods, and we present results on text-conditioned video generation as well as state-of-the-art results on UCF101 unconditional video generation.
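The abstract does not spell out the architecture, but a common way to extend a standard image denoising network to video, consistent with jointly training on images and videos, is to factorize attention into a spatial pass over each frame and a temporal pass across frames. The minimal sketch below is illustrative only: the class name FactorizedSpaceTimeAttention, the layer choices, and the tensor layout are assumptions for exposition, not the authors' implementation.

# Minimal sketch (not the authors' code): a factorized space-time attention
# block of the kind used to extend an image denoising architecture to video.
# All names and layer choices here are illustrative assumptions.
import torch
import torch.nn as nn

class FactorizedSpaceTimeAttention(nn.Module):
    """Self-attention over space within each frame, then over time at each
    spatial location. With T=1 (a single image), the temporal step attends
    only within that frame, so images and videos can share one model."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.spatial_norm = nn.LayerNorm(channels)
        self.temporal_norm = nn.LayerNorm(channels)
        self.spatial_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, height, width, channels)
        b, t, h, w, c = x.shape

        # Spatial attention: tokens are the H*W positions of each frame.
        xs = x.reshape(b * t, h * w, c)
        xs_n = self.spatial_norm(xs)
        xs = xs + self.spatial_attn(xs_n, xs_n, xs_n, need_weights=False)[0]

        # Temporal attention: tokens are the T frames at each spatial position.
        xt = xs.reshape(b, t, h * w, c).permute(0, 2, 1, 3).reshape(b * h * w, t, c)
        xt_n = self.temporal_norm(xt)
        xt = xt + self.temporal_attn(xt_n, xt_n, xt_n, need_weights=False)[0]

        return xt.reshape(b, h * w, t, c).permute(0, 2, 1, 3).reshape(b, t, h, w, c)

if __name__ == "__main__":
    block = FactorizedSpaceTimeAttention(channels=64)
    video = torch.randn(2, 8, 16, 16, 64)   # (B, T, H, W, C) video batch
    image = torch.randn(2, 1, 16, 16, 64)   # single-frame "video" for joint training
    print(block(video).shape, block(image).shape)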
