Abstract: When humans observe a physical system, they can easily locate its components, understand their interactions, and anticipate future behavior, even in settings with complicated and previously unseen interactions. For computers, however, learning such models from videos in an unsupervised fashion is an unsolved research problem. In this paper, we present STOVE, a novel state-space model for videos, which explicitly reasons about objects and their positions, velocities, and interactions. It is constructed by combining an image model and a dynamics model in a compositional manner and improves on previous work by reusing the dynamics model for inference, accelerating and regularizing training. STOVE predicts videos with convincing physical behavior over hundreds of timesteps, outperforms previous unsupervised models, and even approaches the performance of supervised baselines. We further demonstrate the strength of our model as a simulator for sample-efficient model-based control, in a task with heavily interacting objects.
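The compositional structure described above, with a dynamics model advancing explicit object states (positions and velocities) and an image model rendering each state to a frame, can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: STOVE uses learned neural components, whereas here `dynamics_step` and `render` are simple placeholder functions, and all names are illustrative.

```python
import numpy as np

def dynamics_step(state, dt=0.1):
    """Placeholder dynamics model: positions move along velocities.

    state: array of shape (n_objects, 4) holding (x, y, vx, vy).
    In STOVE this role is played by a learned interaction network.
    """
    next_state = state.copy()
    next_state[:, :2] += dt * state[:, 2:]
    return next_state

def render(state, size=16):
    """Placeholder image model: paint object positions into a coarse frame."""
    frame = np.zeros((size, size))
    for x, y, _, _ in state:
        i = int(np.clip(x * size, 0, size - 1))
        j = int(np.clip(y * size, 0, size - 1))
        frame[j, i] = 1.0
    return frame

def rollout(state, n_steps):
    """Predict a video by alternating the dynamics and image models."""
    frames = []
    for _ in range(n_steps):
        state = dynamics_step(state)
        frames.append(render(state))
    return np.stack(frames), state

# Two objects with positions in [0, 1] and small velocities.
init = np.array([[0.2, 0.3, 0.05, 0.0],
                 [0.7, 0.6, -0.02, 0.03]])
video, final_state = rollout(init, n_steps=10)
print(video.shape)  # (10, 16, 16)
```

Because the state is explicit, the same dynamics model can be rolled forward many steps to generate long videos, or reused during inference, which is the reuse the abstract refers to.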

Similar Papers

CoPhy: Counterfactual Learning of Physical Dynamics
Fabien Baradel, Natalia Neverova, Julien Mille, Greg Mori, Christian Wolf

Learning to Control PDEs with Differentiable Physics
Philipp Holl, Nils Thuerey, Vladlen Koltun