1st ICLR Workshop on Time Series in the Age of Large Models
Abstract
Summary: This workshop will delve into time series prediction and analysis in the age of large models. It builds on our track record of fostering community engagement around large models for time series. Our inaugural NeurIPS 2024 workshop demonstrated strong community interest, attracting 99 submissions and over 500 participants (with roughly 1,000 registering interest via Whova). Submissions spanned the full spectrum of the field, from building time series foundation models and leveraging pre-trained models from other modalities to real-world applications and deployment experiences. The rich discussions at NeurIPS 2024 revealed both significant opportunities and fundamental limitations in current approaches, directly informing the research questions we aim to address in this iteration. Building on this momentum, we also organized the ICML 2025 Workshop on Foundation Models for Structured Data, which broadened our perspective by connecting time series researchers with the tabular data community.

Focus and Innovation: For ICLR 2026, we are strategically refocusing on the outstanding research questions that emerged from our previous workshops, particularly around agents, interpretability, and context-informed predictions. This iteration features an evolved organizing team and a fresh speaker lineup, reflecting the field's rapid development. The nascent state of large time series models makes this workshop particularly timely for ICLR 2026, as the community continues to establish foundational principles and explore novel applications in this emerging domain.

Organizer Expertise: The organizers bring extensive research experience and proven leadership in the time series foundation models domain, with diverse backgrounds spanning industry and academia.
Collectively, we have led advances along three key dimensions: (1) foundational model development, creating some of the first time series foundation models, including Lag-Llama, Chronos, Moment, Moirai, and TimesFM; (2) advanced applications, establishing initial frameworks for reasoning and agents in time series through MLZero and TimeSeriesGym; and (3) rigorous evaluation and benchmarking, with tools such as Context-is-Key, GIFT-Eval, TimeSeriesExam, and fev-bench. Beyond research contributions, our team has a demonstrated record of organizing impactful workshops at premier venues, including the NeurIPS 2024 Workshop on Time Series in the Age of Large Models, the AAAI 2024 Spring Symposium on Clinical Foundation Models, the ICAIF 2024 workshop Foundation Models for Time Series: Exploring New Frontiers, and the ICML 2025 Workshop on Foundation Models for Structured Data. This combination of deep technical expertise and proven workshop leadership positions us to facilitate meaningful discussions and foster collaboration in this rapidly evolving field.