Catch, Adapt, and Operate: Monitoring ML Models Under Drift
Abstract
Machine learning systems are increasingly deployed in high-stakes domains such as healthcare, finance, robotics, and autonomous systems, where data distributions evolve continuously. Without robust monitoring and timely adaptation, even high-performing models can degrade silently, compromising reliability, safety, and fairness; continuous monitoring is therefore essential. While drift detection, test-time and continual adaptation, and large-scale deployment of ML systems have each seen rapid progress, these topics are often studied in isolation. The Catch, Adapt, and Operate workshop brings them together around three themes: sensing drift through statistical and representation-based monitoring, responding through adaptive and self-supervised updates, and operating at scale in production pipelines. By connecting theory, systems, and practice, the workshop aims to build a shared foundation for reliable, fair, and continuously adaptive machine learning under real-world drift.