In-Person Poster presentation / poster accept

CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers

Wenyi Hong · Ming Ding · Wendi Zheng · Xinghan Liu · Jie Tang

MH1-2-3-4 #36

Keywords: [ Applications ] [ pretraining ] [ text-to-video generation ]

Abstract
Mon 1 May 2:30 a.m. PDT — 4:30 a.m. PDT


In this work, we present CogVideo, a 9B-parameter transformer for text-to-video generation. CogVideo is trained by inheriting a pretrained text-to-image model, CogView2, which significantly reduces training cost and alleviates the scarcity and weak text-video relevance of training data. We also propose a multi-frame-rate training strategy to better align text and video clips. CogVideo achieves state-of-the-art performance in machine evaluation and outperforms publicly available models by a large margin in human evaluation. The code and model are publicly available at
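The multi-frame-rate idea can be illustrated with a minimal sketch: clips are sampled from source videos at varying temporal strides, and the effective frame rate is exposed to the model as a token prepended to the text prompt. The function names and token format below are hypothetical, not CogVideo's actual interface.

```python
def build_prompt(text: str, frame_rate: int) -> str:
    """Prepend a frame-rate marker token to the text prompt (illustrative format)."""
    return f"<fps-{frame_rate}> {text}"

def sample_frame_indices(total_frames: int, stride: int, clip_len: int = 5) -> list:
    """Pick clip_len evenly strided frame indices for one training clip.

    A larger stride yields a lower effective frame rate but covers a
    longer span of the source video, which helps keep the sampled clip
    semantically aligned with the full text description.
    """
    if (clip_len - 1) * stride >= total_frames:
        raise ValueError("clip does not fit in the source video at this stride")
    return [i * stride for i in range(clip_len)]
```

For example, a 40-frame video sampled with stride 8 yields frame indices `[0, 8, 16, 24, 32]`, paired with a prompt such as `build_prompt("a cat playing piano", 3)`.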
