Building Physical AI at Scale: Data, Infrastructure, and Evaluation for the Real World
Abstract
Physical AI — robots, autonomous vehicles, and embodied agents — is approaching a genuine inflection point. Foundation models for real-world interaction are becoming viable, hardware costs are dropping, and developer interest is surging. Yet most teams building in this space are still stitching together their development stack from incompatible pieces, and that patchwork slows them down. The core bottlenecks are well understood but rarely addressed together. Real-world robotic behavior cannot be learned from synthetic data alone: collecting, annotating, and validating diverse physical-world data at scale is a full operational discipline in its own right. Training multimodal vision-language-action models demands infrastructure purpose-built for the task. And evaluating whether a model actually works in the physical world requires benchmarking approaches that go far beyond standard leaderboards. This social will bring together researchers and practitioners to examine all three problems in parallel. Short talks from speakers with hands-on experience in physical AI development will cover the state of real-world data pipelines, what purpose-built infrastructure for physical AI actually looks like, and how the community is approaching evaluation for embodied systems. An open discussion will follow, focused on where the biggest unsolved problems lie and how the research community can contribute.