CloDS: Visual-Only Unsupervised Cloth Dynamics Learning in Unknown Conditions
Abstract
Deep learning has demonstrated remarkable capabilities in simulating complex dynamic systems. However, existing methods require known physical properties as supervision or as inputs, and this dependence limits their applicability under unknown conditions. To address this challenge, we introduce Cloth Dynamics Grounding (CDG), a novel scenario in which cloth dynamics must be learned without supervision from sparse multi-view visual observations. We further propose Cloth Dynamics Splatting (CloDS), an unsupervised dynamics learning framework designed for CDG, together with a three-stage training scheme that enables unsupervised learning of cloth dynamics. Moreover, to handle the large non-linear deformations and severe self-occlusions that arise in CDG, we introduce a dual-position opacity modulation that supports bidirectional mapping between 2D observations and 3D geometry via mesh-based Gaussian splatting, jointly accounting for the absolute and relative positions of Gaussian components. Comprehensive experiments demonstrate that CloDS effectively learns cloth dynamics from visual data while generalizing well to unseen configurations. Our code is available at https://anonymous.4open.science/r/CloDSICLR/. Visualization results are available at https://anonymous.4open.science/r/CloDSvideo_ICLR/.