Poster in Workshop: 7th Robot Learning Workshop: Towards Robots with Human-Level Abilities
World Models as Reference Trajectories for Rapid Motor Adaptation
Carlos Brito · Daniel McNamee
Deploying learned control policies on real robots poses a fundamental challenge: when the system dynamics change unexpectedly, performance degrades until the models are retrained on new data. We introduce a dual control framework that uses world model predictions as implicit reference trajectories for rapid adaptation while preserving the policy's optimal behavior. Our method separates the control problem into long-term reward maximization, handled by reinforcement learning, and robust motor execution, handled by rapid latent control. In continuous control tasks under varying dynamics, this framework adapts significantly faster than model-based RL baselines while maintaining near-optimal performance. The dual architecture thus combines the flexibility of policy learning through reinforcement learning with the robust adaptation capabilities of classical control, providing a principled approach to maintaining performance in high-dimensional locomotion tasks as dynamics vary.