Poster in Workshop: 5th Workshop on Practical ML for Limited/Low Resource Settings (PML4LRS) @ ICLR 2024
SSM Meets Video Diffusion Models: Efficient Video Generation with Structured State Spaces
Yuta Oshima · Shohei Taniguchi · Masahiro Suzuki · Yutaka Matsuo
Given the remarkable achievements of diffusion models in image generation, the research community has shown increasing interest in extending these models to video generation. Recent diffusion models for video generation have predominantly relied on attention layers to extract temporal features. However, attention layers are limited by their memory consumption, which grows quadratically with sequence length. This limitation presents significant challenges when generating longer video sequences with diffusion models. To overcome this challenge, we propose leveraging state-space models (SSMs), which have recently gained attention as viable alternatives because their memory consumption grows only linearly with sequence length. In our experiments, we first evaluate the SSM-based model on UCF101, a standard benchmark for video generation. In addition, to investigate the potential of SSMs for longer video generation, we conduct experiments on the MineRL Navigate dataset with 64, 200, and 400 frames. In these settings, our SSM-based model substantially reduces memory consumption for longer sequences while achieving FVD scores competitive with attention-based models.
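The core architectural idea is to swap the quadratic-memory temporal attention layer in a video diffusion U-Net for a temporal layer whose memory grows linearly in the number of frames. The sketch below is not the authors' implementation; it is a minimal illustration, assuming a simple diagonal discrete-time SSM applied independently along the frame axis of each channel, with all layer and parameter names (`TemporalSSM`, `state_dim`, etc.) chosen here for illustration only.

```python
# Minimal sketch: a linear-memory temporal SSM layer that could stand in for
# temporal attention on video latents of shape (batch, frames, channels, H, W).
# The diagonal SSM parameterization and the sequential scan are illustrative
# assumptions, not the poster's actual architecture.
import torch
import torch.nn as nn


class TemporalSSM(nn.Module):
    """Applies an independent diagonal state-space recurrence along the frame axis."""

    def __init__(self, channels: int, state_dim: int = 16):
        super().__init__()
        # Learnable discrete-time SSM parameters, one diagonal system per channel.
        self.log_a = nn.Parameter(torch.rand(channels, state_dim).log())  # decay in (0, 1)
        self.b = nn.Parameter(torch.randn(channels, state_dim) * 0.1)
        self.c = nn.Parameter(torch.randn(channels, state_dim) * 0.1)
        self.d = nn.Parameter(torch.zeros(channels))                       # skip term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b_, t, c, h, w = x.shape
        u = x.permute(0, 3, 4, 2, 1).reshape(-1, c, t)    # (batch*H*W, channels, frames)
        a = self.log_a.exp().clamp(max=0.999)             # stable per-channel decays
        state = u.new_zeros(u.shape[0], c, a.shape[-1])
        outputs = []
        for k in range(t):                                 # memory grows linearly in t
            state = a * state + self.b * u[..., k:k + 1]
            y_k = (self.c * state).sum(-1) + self.d * u[..., k]
            outputs.append(y_k)
        y = torch.stack(outputs, dim=-1)                   # (batch*H*W, channels, frames)
        return y.reshape(b_, h, w, c, t).permute(0, 4, 3, 1, 2)


# Usage: apply to noisy video latents in place of a temporal attention block.
video = torch.randn(2, 64, 32, 8, 8)    # (batch, frames, channels, H, W)
out = TemporalSSM(channels=32)(video)
print(out.shape)                         # torch.Size([2, 64, 32, 8, 8])
```

In practice, SSM libraries replace the Python-level loop with a convolutional or parallel-scan formulation for speed, but the key property illustrated here is the same: state and activation memory scale with the number of frames, rather than with its square as in temporal self-attention.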