Poster
VideoGLUE: Video General Understanding Evaluation of Foundation Models
Boqing Gong · Yin Cui · Long Zhao · Tobias Weyand · Ming-Hsuan Yang · Liangzhe Yuan · Mikhail Sirotenko · Florian Schroff · Hao Zhou · Xuan Yang · Menglin Jia · Luke Friedman · Huisheng Wang · Hartwig Adam · Ting Liu · Lu Jiang · Nitesh Bharadwaj Gundavarapu
Hall 3 + Hall 2B #74
We evaluate the video understanding capabilities of existing foundation models (FMs) using a carefully designed experimental protocol consisting of three hallmark tasks (action recognition, temporal localization, and spatiotemporal localization), eight datasets well received by the community, and four adaptation methods tailoring an FM for downstream tasks. Furthermore, we jointly profile FMs' efficacy and efficiency when adapting to general video understanding tasks using cost measurements during both training and inference. Our main findings are as follows. First, task-specialized models significantly outperform the seven FMs studied in this work, in sharp contrast to what FMs have achieved in natural language and image understanding. Second, video-native FMs, whose pretraining data mainly contains the video modality, are generally better than image-native FMs at classifying motion-rich videos, localizing actions in time, and understanding a video of more than one action. Third, video-native FMs can perform well on video tasks under light adaptations to downstream tasks (e.g., freezing the FM backbones), while image-native FMs win in full end-to-end finetuning. The first two observations reveal the need for, and tremendous opportunities in, research on video-focused FMs, and the last confirms that both tasks and adaptation methods matter when evaluating FMs. Our code is released at: https://github.com/tensorflow/models/tree/master/official/projects/videoglue
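To make the evaluation protocol concrete, below is a minimal sketch of how such a (model, task, dataset, adaptation) grid could be swept while recording both a task metric and training/inference cost. Only the three task families and the two adaptation regimes named in the abstract come from the source; every dataset name, model name, and helper function (`evaluate`, `run_grid`, `profile`) is a hypothetical placeholder, not the released implementation (see the linked GitHub project for the actual code).

```python
"""Illustrative sketch of a VideoGLUE-style evaluation grid (not the official code)."""

from dataclasses import dataclass
from itertools import product
from statistics import mean

# Three hallmark task families from the abstract; dataset names are placeholders.
TASKS = {
    "action_recognition": ["dataset_ar_1", "dataset_ar_2"],
    "temporal_localization": ["dataset_tl_1"],
    "spatiotemporal_localization": ["dataset_stl_1"],
}

# The abstract names light adaptation (frozen backbone) and full end-to-end
# finetuning; the remaining adaptation methods would slot in here as well.
ADAPTATIONS = ["frozen_backbone", "end_to_end_finetuning"]

FOUNDATION_MODELS = ["image_native_fm", "video_native_fm"]  # placeholder names


@dataclass
class Result:
    model: str
    task: str
    dataset: str
    adaptation: str
    score: float        # task metric, e.g. top-1 accuracy or mAP
    train_cost: float   # e.g. accelerator-hours or FLOPs spent on adaptation
    infer_cost: float   # e.g. FLOPs per clip at inference


def evaluate(model: str, task: str, dataset: str, adaptation: str) -> Result:
    """Hypothetical stand-in for adapting `model` and scoring it on `dataset`."""
    return Result(model, task, dataset, adaptation,
                  score=0.0, train_cost=0.0, infer_cost=0.0)


def run_grid() -> list[Result]:
    """Sweep the full (model, task, dataset, adaptation) grid."""
    results = []
    for model, (task, datasets), adaptation in product(
            FOUNDATION_MODELS, TASKS.items(), ADAPTATIONS):
        for dataset in datasets:
            results.append(evaluate(model, task, dataset, adaptation))
    return results


def profile(results: list[Result], model: str) -> dict:
    """Jointly summarize efficacy (mean score) and efficiency (mean cost)."""
    rows = [r for r in results if r.model == model]
    return {
        "mean_score": mean(r.score for r in rows),
        "mean_train_cost": mean(r.train_cost for r in rows),
        "mean_infer_cost": mean(r.infer_cost for r in rows),
    }


if __name__ == "__main__":
    results = run_grid()
    for fm in FOUNDATION_MODELS:
        print(fm, profile(results, fm))
```

The point of the sketch is the joint bookkeeping: each cell of the grid yields both an efficacy number and a cost number, so models can be compared per adaptation regime rather than by a single headline score.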