CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning

Rohit Girdhar, Deva Ramanan

Keywords: computer vision, implicit bias, reasoning, video understanding

Wednesday: Actions and Counterfactuals

Abstract: Computer vision has undergone a dramatic revolution in performance, driven in large part by deep features trained on large-scale supervised datasets. However, many of these improvements have focused on static image analysis; video understanding has seen only modest gains. Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often remain competitive. We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure. In this work, we build a video dataset with fully observable and controllable object and scene bias, one that truly requires spatiotemporal understanding to be solved. Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools for analyzing modern spatiotemporal video architectures, by virtue of being completely observable and controllable. Using CATER, we provide insights into some of the most recent state-of-the-art deep video architectures.

Similar Papers

Scaling Autoregressive Video Models
Dirk Weissenborn, Oscar Täckström, Jakob Uszkoreit
AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures
Michael S. Ryoo, AJ Piergiovanni, Mingxing Tan, Anelia Angelova
SCALOR: Generative World Models with Scalable Object Representations
Jindong Jiang, Sepehr Janghorbani, Gerard De Melo, Sungjin Ahn