

Workshop

AI for Mechanism Design and Strategic Decision Making (AIMS)

Xiaotie Deng · Jian Xu · Fabrizio Silvestri · Alireza Fallah · Yurong Chen · Brian Zhang · Haoran Sun
Apr 26, 5:00 AM - 1:00 PM

The rapid advancement of artificial intelligence, particularly in machine learning and foundation models, is creating a new synergy with the classical fields of Mechanism Design (MD) and Strategic Decision Making (SDM). This workshop aims to catalyze interdisciplinary research at this intersection, exploring how modern AI methods can redefine, extend, and automate core problems in MD and SDM. We will bring together researchers from machine learning, economics, and computer science to investigate this symbiotic relationship, focusing on topics such as novel AI applications for MD & SDM and theoretical models for AI-driven methods. This workshop will highlight not only cutting-edge research but also impactful real-world applications and case studies from industry. Overall, our goal is to provide a premier platform for disseminating novel ideas, fostering collaboration across communities, and charting the future of intelligent economic systems.

Workshop

AI for Peace

Noa Garcia · Leonardo Impett · Yannis Kalantidis · Matt Mahmoudi · Evangelos Kazakos · Sonia Fereidooni
Apr 26, 5:00 AM - 1:00 PM

In this workshop, we aim to address the critically under-discussed issue of AI's dual-use nature, focusing on how machine learning technologies are being adapted for military purposes, potentially without the researchers' knowledge or consent. While attending to the heightened risks associated with particular areas and systems of research, we will also collectively think through what it looks like to engage productively in research and development activities that place ethics and international law at their core.

Workshop

ICLR 2026 Workshop on AI with Recursive Self-Improvement

Mingchen Zhuge · Ailing Zeng · Deyao Zhu · Xidong Feng · Sherry Yang · Vikas Chandra · Jürgen Schmidhuber
Apr 26, 5:00 AM - 1:00 PM

Recursive self-improvement (RSI) is moving from thought experiments to deployed AI systems. LLM agents now rewrite their own codebases or prompts, scientific discovery pipelines schedule continual fine-tuning, and robotics stacks patch controllers from streaming telemetry, even improving product-level code. The ICLR 2026 Workshop on AI with Recursive Self-Improvement brings together researchers to discuss a simple question with big consequences: how do we build the algorithmic foundations for powerful and reliable self-improving AI systems? As loops that update weights, rewrite prompts, or adapt controllers move from labs into production, we will surface the methods that work — how to design, evaluate, and govern these loops without hand-waving. This workshop examines algorithms for self-improvement across experience learning, synthetic data pipelines, multimodal agentic systems, weak-to-strong generalization, and inference-time scaling, and will discuss and refine methods for recursive self-improvement. In short, we care about loops that actually get better — and can show it. To give the workshop a clear spine, we organize contributions around five lenses: change targets inside the system, temporal regime of adaptation, mechanisms and drivers, operating contexts, and evidence of improvement. This framing synthesizes recent perspectives on self-evolving agents while grounding them in practical, auditable deployment settings. We are paradigm-agnostic: we welcome work on foundation models, agent frameworks, robots, learning algorithms and optimizers, control and program synthesis, as well as data and infrastructure systems and evaluation tooling that enable recursive self-improvement.

Workshop

Lifelong Agents: Learning, Aligning, Evolving

Cheng Qian · Emre Can Acikgoz · Hongru Wang · Zhenfei Yin · Manling Li · Yun-Nung Chen · Mengdi Wang · Caiming Xiong
Apr 26, 5:00 AM - 1:00 PM

Artificial intelligence has reached a pivotal stage: while current agentic systems excel in static benchmarks, they struggle to adapt to dynamic, real-world environments. This workshop introduces the concept of lifelong agents, AI systems that continuously learn, align, and evolve across their operational lifespan. Such agents must integrate continual learning, long-term alignment with human values, and self-improvement under resource constraints to remain robust, trustworthy, and sustainable. By uniting research from reinforcement learning, large language models, alignment, embodied AI and more, the workshop seeks to establish shared principles, frameworks, and evaluation methods for creating AI that grows intelligently and responsibly over time.

Workshop

Agents in the Wild: Safety, Security, and Beyond

Dawn Song · Chenguang Wang · Nicholas Crispino · Ruoxi Jia · Kyle Montgomery · Yujin Potter · Vincent Siu · Zhun Wang
Apr 26, 5:00 AM - 1:00 PM

AI agents are rapidly being deployed in critical real-world applications, yet their unique safety and security challenges remain underexplored. Unlike standard safety or security settings, agents act autonomously and make irreversible real-world decisions. This creates novel vulnerabilities and fundamental safety challenges for agents in real-world deployments. Our workshop provides the first dedicated venue for addressing the safety, security, and trustworthiness of agents in the wild. We bring together interdisciplinary researchers and practitioners to establish foundational theories and methods for safe agent deployment, identify critical open problems, and chart research directions for trustworthy agentic AI systems.

Workshop

Post-AGI Science and Society Workshop

Donato Crisostomi · Andrea Santilli · Pratyusha Sharma · Valentina Pyatkin · Zorah Lähner · Emanuele Rodolà
Apr 26, 5:00 AM - 1:00 PM

Artificial General Intelligence (AGI) has long seemed distant, but rapid advances in large-scale learning, autonomous reasoning, and open-ended discovery make its emergence increasingly plausible. The Post-AGI Science and Society Workshop asks what comes next. If AGI becomes ubiquitous, reliable, and affordable, how will it reshape scientific inquiry, the economy of knowledge, and human society? Will humans remain central to discovery or become curators and interpreters of machine-generated insights? The workshop brings together researchers from machine learning, philosophy of science, and policy to explore human-AI scientific coexistence. Topics include automated hypothesis generation, causal reasoning in AGI, collaborative discovery, epistemic alignment between humans and machines, and socio-economic shifts driven by pervasive intelligence. Through keynotes, talks, and a panel, we will examine how science and our understanding of knowledge might evolve in a post-AGI world.

Workshop

ICLR 2026 Workshop on Multimodal Intelligence: Next Token Prediction and Beyond

Ivona Najdenkoska · Mohammad Mahdi Derakhshani · Marzieh Fadaee · Kai Han · Saining Xie · Yuki Asano · Cees G Snoek
Apr 26, 5:00 AM - 1:00 PM

Foundation models have transformed multimodal intelligence, enabling open-ended reasoning, dialogue, and generation across vision, language, and audio. A growing body of work now frames this progress under the unifying paradigm of next-X prediction, where X may denote tokens, frames, or scales across discrete or continuous spaces. Discrete autoregressive models, such as Chameleon, extend next-token prediction beyond text, while continuous formulations like VAR, MAR, TransFusion, BAGEL, and Fluid capture next-frame or next-scale dynamics in latent space. Meanwhile, predictive encoders—exemplified by V-JEPA 2—eschew token emission to forecast future representations, focusing on salient, structured aspects of perception and behavior. Complementary to both, discrete diffusion models such as Diffusion-LM, LLaDA, and LaViDa redefine generation as iterative denoising, offering parallelism and improved global consistency. This workshop provides a timely venue to connect these emerging paradigms—next-token generation, predictive encoding, and diffusion-based modeling—and to explore how they can be integrated into unified multimodal systems. Key questions include: Which learning paradigm scales most effectively? How do they differ in representation quality, efficiency, and controllability? And can hybrid models combine their strengths? By bringing together researchers from these diverse communities, the workshop aims to chart a coherent roadmap for the next generation of multimodal foundation models—beyond token prediction alone.

Workshop

Workshop on Logical Reasoning of Large Language Models

Haoxuan Li · Arman Cohan · Michael Witbrock · Mengyue Yang · Fenrong Liu · Zhouchen Lin · Johan van Benthem · Peter Clark
Apr 26, 5:00 AM - 1:00 PM

Large language models (LLMs) have achieved remarkable breakthroughs in natural language understanding and generation, but their logical reasoning capabilities remain a significant bottleneck. Logical reasoning is crucial for tasks requiring precise deduction, induction, or abduction, such as medical diagnosis, legal reasoning, and scientific hypothesis verification. However, LLMs often fail to handle complex logical problems with multiple premises and constraints, and they frequently produce self-contradictory responses across different questions. These limitations not only restrict the reliability of LLMs in complex problem-solving but also hinder their real-world applications. In response to these emerging needs, we propose the workshop on Logical Reasoning of LLMs. This workshop will explore the challenges and opportunities for improving deduction, induction, and abduction capabilities of LLMs, implementing symbolic representation and reasoning via LLMs, avoiding logical contradictions across responses to multiple related questions, enhancing LLM reasoning by leveraging external logical solvers, and benchmarking LLM logical reasoning and consistencies. As LLMs continue to expand their role in AI research and applications, this workshop will serve as a platform to discuss and refine the methods for advancing logical reasoning within LLMs.

Workshop

From Human Cognition to AI Reasoning: Models, Methods, and Applications

Julie Shah · Sarath Sreedharan · Silvia Tulli · Pulkit Verma
Apr 26, 5:00 AM - 1:00 PM

The workshop will explore how explicit models of human knowledge, cognitive capabilities, and mental states can be integrated into AI reasoning processes. We will examine approaches that combine neural and symbolic methods inspired by human cognition, incorporate human causal reasoning patterns, and leverage human teaching signals to create more interpretable and aligned AI systems. More details at https://bit.ly/hcair26

Workshop

Scientific Methods for Understanding Deep Learning (Sci4DL)

Zahra Kadkhodaie · Florentin Guth · Sanae Lotfi · Davis Brown · Antonio Sclocchi · Sharvaree Vadgama · James Simon · Eero Simoncelli
Apr 26, 5:00 AM - 1:00 PM

While deep learning continues to achieve impressive results on an ever-growing range of tasks, our understanding of the principles underlying these successes remains largely limited. This problem is usually tackled from a mathematical point of view, aiming to prove rigorous theorems about optimization or generalization errors of standard algorithms, but so far they have been limited to overly-simplified settings. The main goal of this workshop is to promote a complementary approach that is centered on the use of the scientific method, which forms hypotheses and designs controlled experiments to test them. More specifically, it focuses on empirical analyses of deep networks that can validate or falsify existing theories and assumptions, or answer questions about the success or failure of these models. This approach has been largely underexplored, but has great potential to further our understanding of deep learning and to lead to significant progress in both theory and practice. The secondary goal of this workshop is to build a community of researchers, currently scattered in several subfields, around the common goal of understanding deep learning through a scientific lens.

Workshop

1st ICLR Workshop on Time Series in the Age of Large Models

Arjun Ashok · Abdul Fatir Ansari · Elizabeth Fons · Xiyuan Zhang · Chenghao Liu · Mononito Goswami · Xinyu Li · Yichen Zhou
Apr 26, 5:00 AM - 1:00 PM

Summary: This workshop will delve into aspects of time series prediction and analysis in the age of large models. It builds upon our successful track record of fostering community engagement around large models for time series. Our inaugural NeurIPS 2024 workshop demonstrated strong community interest, attracting 99 submissions and over 500 participants (~1,000 registered interest via Whova). Submissions spanned the full spectrum of the field, from building time series foundation models and leveraging pre-trained models from other modalities to real-world applications and deployment experiences. The rich discussions at NeurIPS 2024 revealed both significant opportunities and fundamental limitations in current approaches, directly informing the research questions we aim to address in this iteration. Building on this momentum, we also organized the successful ICML 2025 Workshop on Foundation Models for Structured Data, which broadened our perspective by connecting time series researchers with the tabular data community.

Focus and Innovation: For ICLR 2026, we are strategically refocusing to dive deeper into outstanding research questions that emerged from our previous workshops, particularly around agents, interpretability, and context-informed predictions. This iteration features an evolved organizing team and a fresh speaker lineup, reflecting the field's rapid development. The nascent state of large time series models makes this workshop particularly timely for ICLR 2026, as the community continues to establish foundational principles and explore novel applications in this emerging domain.

Organizer Expertise: The organizers bring extensive research experience and proven leadership in the time series foundation models domain, with diverse backgrounds from industry and academia. Collectively, we have led advances along three key dimensions: foundational model development, creating some of the first time series foundation models, including Lag-Llama, Chronos, Moment, Moirai, and TimesFM; advanced applications, establishing initial frameworks for reasoning and agents in time series through MLZero and TimeSeriesGym; and rigorous evaluation and benchmarking, using tools such as Context-is-Key, GIFT-Eval, TimeSeriesExam, and fev-bench. Beyond research contributions, our team has demonstrated success in organizing impactful workshops at premier venues, including the NeurIPS 2024 workshop on Time Series in the Age of Large Models, the AAAI’24 Spring Symposium on Clinical Foundation Models, ICAIF’24 Foundation Models for Time Series: Exploring New Frontiers, and the ICML’25 Workshop on Foundation Models for Structured Data. This combination of deep technical expertise and proven workshop leadership positions us to facilitate meaningful discussions and foster collaboration in this rapidly evolving field.

Workshop

AI4MAT-ICLR-2026: ICLR 2026 Workshop on AI for Accelerated Materials Design

Santiago Miret · Defne Circi · N. M. Anoop Krishnan · Emily Jin · Mohamad Moosavi · Stefano Martiniani
Apr 26, 5:00 AM - 1:00 PM

AI4Mat-ICLR-2026 explores the automated discovery of advanced materials through three interconnected pillars: 1. AI-Guided Design; 2. Automated Synthesis; 3. Automated Characterization. By bringing together leading researchers at the intersection of machine learning and materials science, the workshop fosters discussion of cutting-edge advances while building a cohesive, multidisciplinary community tackling some of the field's most pressing challenges. To that end AI4Mat-ICLR-2026's program highlights two leading topics to foster scientific dialogue in relevant subject areas, each featuring carefully curated invited speakers: 1. Reinforcement Learning & Beyond: The Role of Feedback in AI for Materials Science; 2. Cross-Modal, Unified Materials Representations – From Structure to Properties to Performance. In addition to invited talks and technical discussions, AI4Mat-ICLR-2026 continues its commitment to community development through established initiatives, including a Tiny Papers track for early-stage work, travel grants to support broad and inclusive researcher participation, and a dedicated journal venue for high-quality submissions.

Workshop

AI&PDE: ICLR 2026 Workshop on AI and Partial Differential Equations

Eduardo Soares · Daniel Yukimura · Nara Bobko · Arthur Bizzi · Siddhartha Mishra · Elisa Serioli · Ana Muller
Apr 26, 5:00 AM - 1:00 PM

Partial Differential Equations (PDEs) are foundational to modeling complex phenomena across the natural sciences and engineering, from fluid dynamics and quantum systems to climate modeling and materials science. Despite their ubiquity, solving PDEs remains computationally intensive, especially in high-dimensional, multi-physics, and uncertain regimes. Recent advances in machine learning—such as neural operators, physics-informed networks, and foundation models—offer transformative potential to accelerate and generalize PDE solutions. However, realizing this promise requires addressing critical challenges in representation, stability, generalization, and benchmarking. The AI&PDE-ICLR-2026 workshop will convene researchers from machine learning, applied mathematics, physics, and engineering to explore the future of AI-driven PDE modeling. We aim to (1) define the roadmap toward foundation models for PDEs, (2) investigate next-generation representations and architectures, and (3) foster a globally inclusive community. The program will feature invited talks, contributed papers, and themed tracks, including a full papers track for mature research and a tiny papers track for emerging ideas. By bridging disciplines and promoting open benchmarks and datasets, AI&PDE-ICLR-2026 will catalyze progress toward scalable, general-purpose AI solvers for PDEs.

Workshop

Geometry-grounded Representation Learning and Generative Modeling

Alison Pouplin · Sharvaree Vadgama · Sékou-Oumar Kaba · Manuel Lecha · Jakub Tomczak · Robin Walters · Stefanie Jegelka · Erik Bekkers
Apr 26, 5:00 AM - 1:00 PM

Real-world data often originates from physical systems that are governed by geometric and physical laws. Yet, most machine learning methods treat this data as abstract vectors, ignoring the underlying structure that could improve both performance and interpretability. Geometry provides powerful guiding principles, from group equivariance to non-Euclidean metrics, that can preserve the symmetries or the structure inherent in data. We believe those geometric tools are well-suited, and perhaps essential, for representation learning and generative modeling. We propose GRaM, a workshop centered on the principle of grounding in geometry, which we define as: An approach is geometrically grounded if it respects the geometric structure of the problem domain and supports geometric reasoning. This year, we aim to explore the relevance of geometric methods, particularly in the context of large models, focusing on the theme of scale and simplicity. We seek to understand when geometric grounding remains necessary, how to effectively scale geometric approaches, and when geometric constraints can be relaxed in favor of simpler alternatives.

Workshop

Algorithmic Fairness Across Alignment Procedures and Agentic Systems

Zeyu Tang · Prakhar Ganesh · Awa Dieng · Miriam Rateike · Jamelle Watson-Daniels · Golnoosh Farnadi · Jessica Schrouff · Sanmi Koyejo
Apr 26, 5:00 AM - 1:00 PM

AI has transitioned from predictive models to interactive, autonomous agents capable of reasoning, planning, and executing complex goals. As these systems increasingly influence social, economic, and scientific decisions, they determine whose interests are represented and whose opportunities are constrained. Ensuring fairness is therefore no longer an ethical preference but a practical imperative. Because advanced AI systems fundamentally transform the nature of fairness challenges, traditional algorithmic fairness frameworks, developed primarily for prediction and prediction-based decision-making, no longer suffice. This workshop, Algorithmic Fairness Across Alignment Procedures and Agentic Systems (AFAA), emerges at this pivotal moment as a timely forum for rethinking fairness in AI alignment processes and agentic system development. By examining fairness across alignment procedures and agentic systems, the workshop creates a crucial platform for bridging the gap between rapid technical advances in model capabilities and the equally important advances needed in algorithmic fairness frameworks to govern these powerful systems.

Workshop

Unifying Concept Representation Learning

Amit Dhurandhar · Amir-Hossein Karimi · Sara Magliacane · Stefano Teso · Efthymia Tsamoura · Zhe Zeng
Apr 26, 5:00 AM - 1:00 PM

Several areas at the forefront of AI research are currently witnessing a convergence of interests around the problem of learning high-quality concepts from data. Concepts have become a central topic of study in neuro-symbolic integration (NeSy). NeSy approaches integrate perception, usually implemented by a neural backbone, with symbolic reasoning by employing concepts to glue together these two steps: the latter relies on the concepts detected by the former to produce suitable outputs [1–5]. Concepts are also used in Explainable AI (XAI) by recent post-hoc explainers [6–9] and self-explainable architectures [10–13] as a building block for constructing high-level justifications of model behavior. Compared to, e.g., saliency maps, these can portray a more abstract and understandable picture of the machine’s reasoning process, potentially improving understandability, interactivity, and trustworthiness [14–17], to the point that concepts have been called the lingua franca of human-AI interaction [18]. Both areas hinge on learned concepts being “high-quality”. Causal Representation Learning (CRL) aims to identify latent causal variables and causal relations from high-dimensional observations, e.g., images or text, with theoretical guarantees [25]. As such, CRL generalizes disentangled representation learning to the setting where the latent variables depend on each other, e.g., due to causal relations. CRL has become increasingly popular, with a plethora of methods and theoretical results [26–36]. The potential of leveraging CRL to learn more robust and leak-proof concepts is an emerging area of research with a growing number of approaches [24, 37–40], but many open questions remain. In particular, it is unclear what properties high-quality concepts should satisfy, and, despite studying the same underlying object, research in these areas is proceeding along mostly independent tracks, with minimal knowledge transfer. Efforts at adapting ideas and techniques are limited at best, meaning that approaches in one area largely ignore insights from the others. As a result, the central question of how to properly learn and evaluate concepts remains largely unanswered. This workshop brings together researchers from NeSy, XAI, and CRL, from both industry and academia, who are interested in learning robust, semantically meaningful concepts. By facilitating informal discussion between experts and newcomers alike, it aims to tie together these currently independent strands of research and promote cross-fertilization.

Workshop

New Frontiers in Associative Memories

Krishna Balasubramanian · Rogerio Feris · Benjamin Hoover · Julia Kempe · Hilde Kuehne · Zhaoyang Shi
Apr 26, 5:00 AM - 1:00 PM

The primary focus of this workshop is to strengthen the analytical foundations of associative memory while exploring its emerging role in the design of agentic AI systems. By bringing together researchers from optimization and deep learning, statistical physics, neuroscience and machine learning systems, the workshop aims to catalyze cross-disciplinary exchange, identify open problems, and foster collaboration toward advancing the theoretical and practical frontiers of associative memory. A central goal is to build a cohesive community at the intersection of these fields, one that unites rigorous mathematical foundations with scalable architectures and applications where associative memories serve as the core drivers of reasoning, adaptation, and intelligent behavior.

Workshop

VerifAI-2: The Second Workshop on AI Verification in the Wild

Celine Lee · Ameesh Shah · Theo X. Olausson · Sean Welleck · Armando Solar-Lezama · Tao Yu
Apr 26, 5:00 AM - 1:00 PM

This workshop series explores the intersection of scale-driven generative artificial intelligence (AI) and the correctness-focused principles of verification. In its first rendition at ICLR 2025, it focused in particular on how generative AI can address the scaling challenges faced by formal analysis tools such as theorem provers, satisfiability solvers, and execution monitoring. The special theme of VerifAI@ICLR'25 was thus Large Language Models (LLMs) for Code Generation, an undeniably active area of research across both industry and academia, which has benefited greatly from (and improved) formal analysis tools such as static analyzers. Now, in light of the recent emphasis on large-scale post-training through reinforcement learning (RL), we are excited to continue uniting the interests of industry and academia with a new special theme: Building verifiable tasks and environments for RL.

Workshop

3rd Workshop on Navigating and Addressing Data Problems For Foundation Models (DATA-FM)

Zheng Xu · Ruoxi Jia · Martin Jaggi · Mónica Ribero · Pratyush Maini · Jiachen (Tianhao) Wang · Luxi He · Yuzheng Hu
Apr 26, 5:00 AM - 1:00 PM

The past year has witnessed remarkable advances in foundation models (FMs): new post-training paradigms such as reinforcement learning with verifiable rewards (RLVR) that strengthen reasoning, increasingly multimodal and agentic systems, and renewed attention to benchmark design and evaluation. Each of these advances depends on distinct data innovations: verifiable reward signals and reasoning traces for RLVR; aligned cross-modal corpora and interaction logs for multimodality and agency; and leak-resistant, representative test sets for evaluation. Taken together, these dependencies underscore the continuing centrality of data as a design variable at the forefront of FM research. Meanwhile, longstanding challenges in data collection, curation, and synthesis remain unresolved, while concerns surrounding copyright, privacy, and fairness have only intensified. Building on the success of the first two DATA-FM workshops at ICLR 2024 and 2025, the third edition will revisit these persistent issues while highlighting emerging ones at the frontiers of post-training, multimodality, and evaluation. By convening researchers and practitioners across diverse research communities, DATA-FM seeks to advance understanding of data’s evolving role in FMs and foster innovative solutions shaping the next generation of models and applications.

Workshop

Catch, Adapt, and Operate: Monitoring ML Models Under Drift

Sepidehsadat Hosseini · Motasem Alfarra · Chung-Chi Chen · Elham Dolatabadi · Bo Li · Murat Sensoy · Dequan Wang · Teresa Yeo
Apr 26, 5:00 AM - 1:00 PM

Machine learning systems are increasingly deployed in high-stakes domains such as healthcare, finance, robotics, and autonomous systems, where data distributions evolve continuously. Without robust monitoring and timely adaptation, even high-performing models can degrade silently, compromising reliability, safety, and fairness. Continuous monitoring is therefore an absolute necessity. While there has been rapid progress in drift detection, test-time and continual adaptation, and the deployment of ML systems at scale, these topics are often studied separately. The Catch, Adapt, and Operate workshop brings them together around three themes: sensing drift through statistical and representation-based monitoring, responding through adaptive and self-supervised updates, and operating at scale in production pipelines. By connecting theory, systems, and real-world practice, the workshop aims to build a shared foundation for reliable, fair, and continuously adaptive machine learning under real-world drift.

Workshop

Latent & Implicit Thinking – Going Beyond CoT Reasoning

Xinyi Wang · Nikunj Saunshi · Rui-Jie Zhu · Liu Yang · Yuntian Deng · Nishanth Dikkala · Jiaheng Liu · Zhiyuan Li
Apr 27, 5:00 AM - 1:00 PM

Recent advances in AI have revealed that explicit Chain-of-Thought (CoT) reasoning—where models verbalize intermediate reasoning steps—while powerful, is not the only or most efficient form of reasoning. The emerging paradigm of latent and implicit thinking explores how models can reason within their hidden representations or parameter space, using continuous latent states, recurrent or looped architectures, and non-autoregressive formulations such as diffusion or search-based models. This workshop, Latent & Implicit Thinking: Going Beyond CoT Reasoning (LIT), aims to unify these growing research efforts across different areas. It will feature discussions on latent-space reasoning tokens, looped and recurrent architectures, latent generative paradigms, and theoretical insights on the nature of latent reasoning depth and efficiency. By bringing together experts from academia and industry, LIT will provide a forum for deep technical exchange and cross-disciplinary collaboration, fostering a new shared framework for understanding and enhancing reasoning in the latent space of neural networks.

Workshop

Agentic AI in the Wild: From Hallucinations to Reliable Autonomy

Grigorios Chrysos · Yixuan Li · Etsuko Ishii · Xuefeng Du · Katia Sycara
Apr 27, 5:00 AM - 1:00 PM

When we delegate tasks to AI agents, can we count on them to get it right? Agentic AI systems are increasingly stepping beyond static generation tasks into autonomous decision-making: scheduling meetings, booking travel, managing workflows, and assisting in scientific research. In these contexts, reliability is not just important; it is essential. Yet today’s foundation models remain prone to a critical failure mode: hallucination, where outputs are factually incorrect, semantically implausible, or detached from reality. While hallucinations are concerning in any generative system, these challenges are amplified in agentic settings, where models execute sequences of decisions without continuous human oversight.

Workshop

2nd Workshop on World Models: Understanding, Modelling and Scaling

Mengyue Yang · Xidong Feng · Nick Hansen · Francesco Faccio · Dima Damen
Apr 27, 5:00 AM - 1:00 PM

The second ICLR Workshop on World Models explores scalable frameworks that unify generative modeling, sequential decision-making, multimodal learning, and causal reasoning. As world models mature from conceptual prototypes into system-level infrastructures for intelligence, this edition focuses on three core themes: (i) understanding and knowledge extraction of the world, (ii) large-scale training and rigorous evaluation, and (iii) cross-modal and control-centric scaling across language, vision, and action. Building on the success of the 2025 inaugural workshop with over 1,500 participants, the 2026 edition introduces systems-level discussions, robotics case studies, and failure-mode post-mortems emphasizing reproducibility, safety, and robustness. The workshop will culminate in a synthesis article summarizing insights from both editions—tracing the evolution of world model research, consolidating key lessons, and outlining future directions toward scalable, grounded, and causally coherent intelligence.

Workshop

ICLR 2026 Workshop on Memory for LLM-Based Agentic Systems (MemAgents)

Zhenguang Cai · Wenyue Hua · Keshuang Li · Yunpu Ma · Ercong Nie · Hinrich Schuetze · Karolina Stanczak · Matthew E Taylor
Apr 27, 5:00 AM - 1:00 PM

Agentic systems are already being deployed in high-stakes settings such as robotics, autonomous web interaction, and software maintenance, and their capabilities ultimately hinge on memory. While LLM memorization typically refers to static, in-weights retention of training data or recent context, agent memory is online, interaction-driven, and under the agent’s control. Agentic systems must operate over extended horizons, learn from interaction, and adapt as goals and contexts shift. The limiting factor is increasingly not raw model capability but memory: how agents encode, retain, retrieve, and consolidate experience into useful knowledge for future decisions. Consistent with this view, recent commentary has argued that reinforcement learning can finally generalize when supplied with strong priors and explicit reasoning; however, current evaluations often underplay sequential accumulation of experience, where memory becomes decisive. In this context, we propose a workshop devoted to the memory layer for LLM-based agentic systems. Our premise is that long-lived, safe, and useful agents require a principled memory substrate that supports single-shot learning of instances, context-aware retrieval, and consolidation into generalizable knowledge. This workshop aims to advance the design of the memory layer for agentic systems and to convene interdisciplinary researchers across reinforcement learning, memory research, large language models, agentic systems, and neuroscience, with an organizing team that spans these communities.

Workshop

Learning Meaningful Representations of Life (LMRL) Workshop @ ICLR 2026

Kristina Ulicna · Rebecca Boiarsky · Till Richter · Soo-Jeong Kim · Lazar Atanackovic · Jason Hartford · Romain Lopez · Thouis Jones
Apr 27, 5:00 AM - 1:00 PM

The Learning Meaningful Representations of Life (LMRL) Workshop 2026 aims to identify the key bottlenecks in the development of virtual cells. Virtual cells are in silico representations of a cell’s behaviour and dynamics in both health and disease, with immense implications for research, diagnostics and therapeutic development. Building towards such a system begins with learning meaningful representations within individual modalities; these form the foundation for scaling complex, heterogeneous biological signals into a coherent model of a cell and for combining them into integrative models that capture biology’s complexity. LMRL 2026 highlights emerging directions for overcoming these challenges by focusing on four core ingredients: causality in biological systems, generative modelling, interpretable representations, and leveraging virtual cells for real-world impact. This workshop aims to catalyse advances in how we learn meaningful representations by bringing together the AIxBio community around a shared scientific roadmap.

Workshop

The 2nd Workshop on Foundation Models for Science: Real-World Impact and Science-First Design

Wuyang Chen · Yongji Wang · N. Benjamin Erichson · Laurence Perreault-Levasseur · Bo Li · Damian Borth · Swarat Chaudhuri
Apr 27, 5:00 AM - 1:00 PM

Scientific foundation models should be built for science, not for generic AI tastes or leaderboard prestige. This workshop centers problem-driven design: models that measurably advance real scientific inquiries, e.g., forecasting extreme climate events, accelerating materials discovery, understanding biological mechanisms, co-developed with domain experts and validated against field data, experiments, and downstream impact. We argue that foundation models for science must be built differently from language and vision. Scientific data are physical, causal, spatiotemporal, and often scarce or biased; objectives must reflect mechanistic fidelity, not just predictive accuracy. This calls for scientific priors and constraints, robust uncertainty quantification (UQ), and architectures that natively handle multi-modality (e.g., grids, meshes, spectra, time series, point clouds, text, images, code). It also demands tight integration with classical scientific tools (simulators, PDE solvers, optimization and inference engines, and HPC workflows) to yield hybrid systems that are faster, more accurate, and more trustworthy. We will highlight opportunities and hard problems unique to science: enforcing conservation laws and symmetries; learning across vast spatial and temporal scales; representing extreme events and tipping points; calibrating and validating UQ; and developing evaluation protocols that reward mechanistic insight and actionable reliability. The goal is a roadmap for building, training, and deploying scientific foundation models that accelerate discovery while respecting the structure of the natural world.

Workshop

Representational Alignment (Re$^4$-Align)

Badr AlKhamissi · Brian Cheung · Dota Tianai Dong · Stephanie Fu · Erin Grant · Kushin Mukherjee · Ilia Sucholutsky · SIDDHARTH SURESH
Apr 27, 5:00 AM - 1:00 PM

Representational alignment between artificial and biological neural systems continues to be a rapidly growing research area across the machine learning, neuroscience, and cognitive science communities; we counted 688 papers submitted to ICLR 2026 on this set of interdisciplinary topics, up from 443 papers submitted to ICLR 2025, and 303 to ICLR 2024, representing an average 51% yearly increase. The Re-Align Workshop at ICLR 2026 facilitates interdisciplinary discussion among these communities, highlights unexpected findings from last year’s hackathon, and pushes beyond the foundational questions of alignment addressed in the previous workshops to focus on two novel and critical interdisciplinary applications of representational alignment: enabling neural control via representational alignment and evaluating the downstream behaviors enabled by representational alignment.

Workshop

Workshop on Multi-Agent Learning and Its Opportunities in the Era of Generative AI

Jianhong Wang · Caroline Wang · Feng Chen · Arrasy Rahman · Felipe Leno da Silva · Rupali Bhati · Bo Liu · Mustafa Mert Çelikok
Apr 27, 5:00 AM - 1:00 PM

The rapid emergence of generative AI has revitalized interest in multi-agent learning as a foundation for building systems that can reason, coordinate, and adapt across diverse environments. This workshop seeks to explore the growing convergence between multi-agent learning and generative AI, emphasizing their mutual potential to advance both theoretical understanding and practical capability. We focus on three interrelated fronts where this integration is most visible: (1) LLM-based multi-agent systems, where large language models interact, cooperate, or compete in structured settings; (2) real-world distributed system control, where multi-agent learning offers scalable and data-driven coordination strategies for complex real-world systems such as smart cities; and (3) human-AI interaction, where generative AI enables richer modelling of human preferences, values, and behaviours, supporting more human-aligned multi-agent systems. By bringing together researchers from machine learning, game theory, cognitive science, and human-computer interaction, this workshop aims to bridge methodological insights and emerging applications, fostering a shared agenda for the age of multi-agent generative AI systems.

Workshop

Integrating Generative and Experimental Platforms for Biomolecular Design

Chenghao Liu · Jarrid Rector-Brooks · Soojung Yang · Sidney Lisanza · Jacob Gershon · Lauren Hong · Pranam Chatterjee · Yoshua Bengio
Apr 27, 5:00 AM - 1:00 PM

Biomolecular design, through artificial engineering of proteins, ligands, nucleic acids, and cells, holds immense promise in addressing pressing medical, industrial, and environmental challenges. While generative machine learning has shown significant potential in this area, a disconnect exists with experimental biology: many ML research efforts prioritize static benchmark performance, potentially sidelining impactful biological applications. This workshop seeks to bridge this gap by bringing computationalists and experimentalists together, catalyzing a deeper interdisciplinary discourse. Together, we will explore the strengths and challenges of generative ML in biology, experimental integration of generative ML, and biological problems ready for ML. To attract high-quality and diverse research, we partnered with Nature Biotechnology for a special collection, and we created dedicated tracks for in-silico ML research and hybrid ML-experimental biology research. Our lineup features emerging leaders as speakers and renowned scientists as panelists, encapsulating a spectrum from high-throughput experimentation and computational biology to generative ML. To catalyze new collaborations, we will host a seed-grant competition for pairs of experimentalists and computationalists proposing fresh joint projects. To connect dry and wet lab practice, a wet-lab challenge sponsored by Adaptyv Bio will empirically evaluate protein design models. With a diverse organizing team and backed by industry sponsors, we dedicate the workshop to pushing the boundaries of ML's role in biology. This will be the third edition of this workshop following the previous versions of it we organized at ICLR 2024 and 2025.

Workshop

I Can't Believe It's Not Better: Where Large Language Models need to improve

Arno Blaas · Priya DCosta · Fan Feng · Zhaoying Pan · Nikolai Rozanov · Jennifer Williams · Yubin Xie · Rui Yang
Apr 27, 5:00 AM - 1:00 PM

Large language models (LLMs) have advanced rapidly, yet these advances have also highlighted gaps, such as hallucination, brittle reasoning, alignment failures, and hard efficiency/scaling constraints, especially in safety-critical settings. Ideally, evidence of such limitations would immediately lead to improvements to address these gaps, but compute constraints and unfruitful approaches often stall iteration; meanwhile, publication norms still prioritize positive results over informative null or negative findings. This workshop creates a venue for negative results on LLMs, including: (i) rigorous studies that demonstrate and analyze limitations (e.g., leak-resistant reasoning probes, alignment stress tests, failure audits in critical applications), and (ii) attempts to apply well-established ideas that did not deliver the expected gains, with analyses that identify failure modes, boundary conditions, and lessons learned. We welcome diagnostics, replications, counterfactual evaluations, and ablations that separate genuine capability from shortcut learning and clarify when methods break, why they break, and how to fix them. By aggregating evidence of negative results and actionable takeaways, the workshop aims to convert setbacks into robust principles and practices for building more reliable LLMs.

Workshop

Machine Learning for Genomics Explorations (MLGenX)

Ehsan Hajiramezanali · Wei Qiu · Arman Hasanzadeh · Tommaso Biancalani · Mihaela van der Schaar · Fabian Theis · Aviv Regev
Apr 27, 5:00 AM - 1:00 PM

Despite rapid advances in data-driven biology, our limited understanding of the biological mechanisms underlying diseases continues to hinder therapeutic innovation. While genomics and multi-omics platforms have generated vast datasets, translating these into actionable biological insights remains an open challenge. At the same time, the emergence of foundation models and AI agents capable of reasoning, planning, and hypothesis generation offers a unique opportunity to reimagine how we approach discovery in biology. The 3rd MLGenX workshop aims to bring together the machine learning, genomics, and biology communities to explore this new frontier. This year’s theme, “From Reasoning to Experimentation: Closing the Loop Between AI Agents and the Biological Lab,” focuses on adaptive, interpretable, and experiment-aware AI systems that learn from feedback and drive biological insight. By fostering interdisciplinary collaboration, benchmark sharing, and open discussion, MLGenX 2026 aims to chart the path toward lab-in-the-loop science and accelerate innovation in biology and drug discovery.

Workshop

Workshop on Scaling Post-training for LLMs (SPOT)

Devvrit Khatri · Rishabh Tiwari · Lovish Madaan · Sewon Min · Gagan Jain · Nan Rosemary Ke · Kurt Keutzer · Prateek Jain
Apr 27, 5:00 AM - 1:00 PM

Post-training, encompassing techniques like Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL), is no longer a mere final step for task-specific adaptation. It is evolving into a compute-intensive phase in its own right, crucial for unlocking the full potential of foundational models and optimizing for critical downstream behaviors. Yet, the science of post-training, at scale, remains in its infancy. This workshop is motivated by the urgent need to establish rigorous and scalable methodologies, design choices, and approaches for post-training. While today's design choices in pre-training are made with a core focus on their ability to scale, a similar scaling laws mindset for post-training is largely absent. Our goal is to catalyze a systematic understanding of how post-training scales—across algorithms, data regimes, infrastructure, and objectives—and to identify the open questions that must be addressed to turn post-training into a science of its own. This workshop aims to bring together diverse perspectives from academic and industrial researchers and practitioners, to share practical experiences, and to outline a clear research direction toward building a principled science of post-training at scale.

Workshop

The 2nd Workshop on Advances in Financial AI: Towards Agentic and Responsible Systems

Nazanin Mehrasa · Ioana Boier · CHANYEOL CHOI · Yongjae Lee · Salwa Alamir · Simon Lucey
Apr 27, 5:00 AM - 1:00 PM

The financial domain is undergoing rapid transformation driven by advances in artificial intelligence. Building on last year’s "Advances in Financial AI: Opportunities, Innovations, and Responsible AI" workshop, this second edition will focus particularly on the emergence of agentic systems in finance (autonomous or semi-autonomous agents, decision-making systems, multi-agent interactions), and the imperative of responsibility (ethics, fairness, accountability, transparency, robustness, regulation). This workshop aims to bring together researchers, practitioners, and policymakers to explore both the opportunities and risks of agentic financial AI systems, to share recent innovations, and to work towards foundations and best practices that ensure such systems are safe, trustworthy, and socially aligned.

Workshop

The 3rd Workshop on Test-Time Updates (TTU)

Evan Shelhamer · francesco croce · Teresa Yeo · Shuaicheng Niu · Behzad Bozorgtabar · Xiaoxiao Li
Apr 27, 5:00 AM - 1:00 PM

The common paradigm of deep learning distinguishes between the training stage, where model parameters are learnt on massive datasets, and deployment, during which the frozen models are tested on unseen data. If the test-time data distribution changes, or the model needs to satisfy new requirements, a new training round is needed. Test-time updates (TTU), including test-time adaptation (TTA), post-training editing, in-context learning, and online continual learning, offer a complementary path to re-training: adapt when and where data shift occurs. Test-time updates are relevant across model sizes: they can be used to edit the knowledge in large foundation models for which re-training has prohibitive costs, as well as to adapt models on edge devices. Moreover, test-time adaptation finds applications in a variety of tasks, from vision to natural language to time series analysis, each presenting its specific challenges and methods. Finally, the goals of test-time approaches are multiple, spanning robustness, customization, and computational efficiency. In this workshop we want to bring together these different facets of test-time updates, connecting researchers focusing on topics typically treated as independent problems. We believe that this will offer a unique opportunity for cross-area collaboration: sharing domain-specific challenges and solutions will bridge diverse communities and foster beneficial cross-pollination. We will welcome works on methods, theory, systems, and evaluations for TTU/TTA across modalities (vision, language, audio, etc.), scales (from edge to cloud), and openness (open/closed models, black-/white-box scenarios). We will highlight principled objectives, safe/robust updates, practical parameterizations (inputs, features, adapters, heads), and cost-aware/green practices that respect latency, energy, and monetary budgets.

Workshop

Deep Generative Model in Machine Learning: Theory, Principle and Efficacy (2nd Workshop)

Andi Han · Valentin De Bortoli · Mingyuan Bai · Sara Fridovich-Keil · Wei Huang · Taiji Suzuki · Qing Qu · Kenji Fukumizu
Apr 27, 5:00 AM - 1:00 PM

The 2nd Deep Generative Models in Machine Learning: Theories, Principles, and Efficacy (DeLTa 2026) workshop aims to bridge the gap between theory and practice in modern generative modeling. Deep Generative Models (DGMs)—including VAEs, GANs, flows, autoregressive, and diffusion models—have transformed AI research, yet fundamental theoretical and algorithmic challenges persist. DeLTa 2026 will bring together experts across statistics, optimization, and deep learning to address two central questions: (1) How can we develop unified theoretical frameworks to understand and design advanced generative models? and (2) How can we improve their efficiency, reliability, and transferability in real-world applications? This year’s workshop expands its scope to include emerging frontiers such as flow matching, stochastic control, discrete and low-dimensional diffusion models, post-training theory, and large language diffusion models. By fostering dialogue between theoretical and applied communities, DeLTa 2026 seeks to establish principled foundations that guide scalable, interpretable, and safe generative modeling. The workshop will feature invited talks, contributed papers, and a dedicated short-paper track to encourage participation from early-career and underrepresented researchers. Building on the success of DeLTa 2025, we anticipate over 400 participants and vibrant interdisciplinary engagement at ICLR 2026.

Workshop

ReALM-GEN: Real-World Constrained and Preference-Aligned Flow- and Diffusion-based Generative Models

Paris Giampouras · Morteza Mardani · Yingzhen Li · Giannis Daras · Johann Wenckstern · Charlotte Bunne
Apr 27, 5:00 AM - 1:00 PM

Diffusion and flow-based generative models power today’s breakthroughs in Generative AI, showing impressive results in generating various types of data ranging from images and video to protein molecules and text. However, making them respect real-world constraints and align with users' preferences, whether in the post-training phase or at inference time, is still an unsolved challenge. ReALM-GEN at ICLR 2026 will bring together a diverse community of researchers spanning theoretical foundations of ML and generative models, vision, language, robotics, and scientific applications of AI, to explore bold ideas and practical tools for adapting and/or steering pretrained flow- and diffusion-based models toward real-world constraint satisfaction and alignment with user preferences.

Workshop

4th ICLR Workshop on Machine Learning for Remote Sensing

Esther Rolf · Bianca Zadrozny · Hannah Kerner · Marc Rußwurm · Evan Shelhamer · Gabriel Tseng · Ronny Hänsch · Hamed Alemohammad
Apr 27, 5:00 AM - 1:00 PM

Machine Learning for Remote Sensing (ML4RS) has rapidly evolved into a vibrant research area. Remote sensing provides the ML community with an unparalleled source of multimodal, spatiotemporal data—challenging algorithms to learn from vast, heterogeneous, and dynamically changing observations of our planet. Building on the success of ML4RS workshops at ICLR 2023-2025, the 4th ICLR Workshop on Machine Learning for Remote Sensing will focus on bridging the persistent gap between publication and practice. Our theme, “ML4RS: From Publication to Practice,” aims to connect research innovations with their real-world deployment. This year’s workshop introduces two new elements: an interactive tutorials track and an opportunity for research track papers to be published in journal proceedings. Alongside invited provocations and debates on “Foundation Models in ML4RS: Are We There Yet?”, our program highlights contributions across key challenges in the field—including data efficiency, interpretability, benchmarking, and global versus local model design. Building on ML4RS’s tradition of highlighting speakers and challenges related to the ICLR host location, ML4RS 2026 emphasizes local engagement with Brazil’s dynamic remote sensing and ML communities while continuing to cultivate a diverse, international ecosystem of researchers, practitioners, and end-users. By bridging methodological innovation and practical application, ML4RS 2026 aims to advance the scientific and societal impact of machine learning for Earth observation.

Workshop

Generative AI in Genomics (Gen^2): Barriers and Frontiers

Pinar Demetci · Maria Skoularidou · Dongshunyi Li · Valentin De Bortoli · Tamara Broderick · Max Welling · Arnaud Doucet · Renzo Soatto
Apr 27, 5:00 AM - 1:00 PM

Generative AI (GenAI) is transforming biology, with breakthrough applications like directed evolution in protein science. The parallel ambition to engineer cellular and tissue states in genomics is now a major frontier, yet progress is hampered by domain-specific roadblocks. Our workshop is designed to bridge this gap between GenAI's promise and its practical applications towards this goal. With recent large-scale data initiatives launched to support GenAI models creating an inflection point for the field, timing is ideal. Through a field-grounding keynote by a genomics expert, invited talks by GenAI practitioners, contributed presentations, and a moderated debate, we will bring together experts and early-career scientists from machine learning and experimental genomics to collaboratively define a roadmap for progress. Our program will target core, interconnected challenges across the development pipeline: from data generation priorities and model design for genomic hierarchies to biologically-grounded evaluation frameworks and interpretability. By defining promising research directions and critical evaluations, our ultimate goal is to catalyze a new generation of models for tangible biological impact.

Workshop

Principled Design for Trustworthy AI: Interpretability, Robustness, and Safety Across Modalities

Tsui-Wei (Lily) Weng · Nghia Hoang · Tengfei Ma · Jake Snell · francesco croce · Chandan Singh · Subarna Tripathi · Lam Nguyen
Apr 27, 5:00 AM - 1:00 PM

Modern AI systems, particularly large language models, vision-language models, and deep vision networks, are increasingly deployed in high-stakes settings such as healthcare, autonomous driving, and legal decisions. Yet their lack of transparency, fragility to distributional shifts between train/test environments, and representation misalignment in emerging tasks and data/feature modalities raise serious concerns about their trustworthiness. This workshop focuses on developing trustworthy AI systems by principled design: models that are interpretable, robust, and aligned across the full lifecycle, from training and evaluation to inference-time behavior and deployment. We aim to unify efforts across modalities (language, vision, audio, and time series) and across technical areas spanning interpretability, robustness, uncertainty, safety, and policy. Our goal is to create a workshop platform for cross-disciplinary discussion and idea exchange across key dimensions of trustworthiness in modern AI systems. These include interpretability & mechanistic transparency, uncertainty quantification & risk assessment for safe operation, adversarial & distributional robustness, and representation & safety alignment across diverse tasks & modalities. By bringing together these efforts under a cohesive design paradigm, the workshop seeks to advance forward-looking solutions and foster community building around shared technical & societal challenges in building trustworthy AI systems. This workshop differs from recent prior workshop efforts (e.g., ICML’24 TiFA, NeurIPS’24 Interpretable AI, IJCAI’24 Trustworthy AI) in its unique focus on building trustworthy AI systems by design and its broad coverage of the full machine learning lifecycle across both single- and multi-modal settings.
Topics of interest span six pillars: (1) Interpretable and Intervenable Models: concept bottlenecks and modular architectures, neuron tracing and causal influence methods, mechanistic interpretability and concept-based reasoning, and interpretability for control and real-time intervention; (2) Inference-Time Safety and Monitoring: reasoning trace auditing in LLMs and VLMs, inference-time safeguards and safety mechanisms, chain-of-thought consistency and hallucination detection, and real-time monitoring and failure intervention mechanisms; (3) Multimodal Trust Challenges: grounding failures and cross-modal misalignment, safety in vision-language and deep vision systems, cross-modal alignment and robust multimodal reasoning, and trust and uncertainty in video, audio, and time-series models; (4) Robustness and Threat Models: adversarial attacks and defenses, robustness to distributional, conceptual, and cascading shifts, formal verification methods and safety guarantees, and robustness under streaming, online, or low-resource conditions; (5) Trust Evaluation and Responsible Deployment: human-AI trust calibration, confidence estimation and uncertainty quantification, metrics for interpretability, alignment, and robustness, transparent, reproducible, and accountable deployment pipelines, and safety alignment in fine-tuning, instruction-tuning, and retrieval-augmented systems; and (6) Safety and Trustworthiness in LLM Agents: autonomous tool use and agentic behavior in LLMs, safety and failures in planning and action execution, emergent behaviors in multi-agent interactions, intervention and control in agent loops, alignment of long-horizon goals with user intent, and auditing and debugging LLM agents in real-world deployment.

Workshop

The First Workshop on Efficient Spatial Reasoning

Haozheng Luo · Yijiang Li · Zhenyu Pan · Ruiyang Qin · Weiyang Liu · Zhijian Liu · Manling Li · Nuno Vasconcelos
Apr 27, 5:00 AM - 1:00 PM

Spatial reasoning—the ability to understand, represent, and manipulate spatial relationships among objects, agents, and environments—has been profoundly advanced by large foundation models, enabling breakthroughs in 3D reconstruction, scene understanding, and vision–language reasoning. However, current models often rely on massive parameter scales or test-time extensions, introducing significant inefficiencies during training and inference. They also struggle with multi-step reasoning and the nuanced comprehension of complex spatial relations, where unreliable reasoning paths undermine both efficiency and accuracy. To address these challenges, we propose a workshop that unites researchers and practitioners from academia and industry to advance efficient spatial reasoning—approaches that improve generalization and robustness while remaining computationally practical. Topics include symbolic–neural integration, geometric deep learning, scalable reasoning architectures, and evaluation frameworks. Through invited talks and discussions, the workshop will examine efficiency–accuracy trade-offs, cross-modal reasoning, and real-world robustness, fostering collaboration across AI, cognitive science, and applied domains.
