Sungjin Ahn · Wilka Carvalho · Klaus Greff · Tong He · Thomas Kipf · Francesco Locatello · Sindy Löwe

Discrete abstractions such as objects, concepts, and events form the basis of our ability to perceive the world, relate the pieces in it, and reason about their causal structure. The research communities of object-centric representation learning and causal machine learning have – largely independently – pursued a similar agenda of equipping machine learning models with more structured representations and reasoning capabilities. Despite their different languages, these communities share similar premises and pursue the same benefits. They operate under the assumption that, compared to a monolithic/black-box representation, a structured model will improve systematic generalization, robustness to distribution shifts, downstream learning efficiency, and interpretability. The two communities, however, typically approach the problem from opposite directions. Work on causality often assumes a known (true) decomposition into causal factors and focuses on inferring and leveraging interactions between them. Object-centric representation learning, on the other hand, typically starts from an unstructured input and aims to infer a useful decomposition into meaningful factors, and has so far been less concerned with their interactions. This workshop aims to bring together researchers from object-centric and causal representation learning. To help integrate ideas from these areas, we invite perspectives from other fields, including cognitive psychology and neuroscience. We …

Minqi Jiang · Jack Parker-Holder · Michael D Dennis · Mikayel Samvelyan · Roberta Raileanu · Jakob Foerster · Edward Grefenstette · Tim Rocktaeschel

Open-ended learning processes that co-evolve agents and their environments gave rise to human intelligence, yet producing such a system, one that generates endless, meaningful novelty, remains an open problem in AI research. We hope our workshop provides a forum both for bridging knowledge across a diverse set of relevant fields and for sparking new insights that can enable agent learning in open-endedness.

David Adelani · Angela Fan · Jade Abbott · Perez Ogayo · Hady Elsahar · Salomey Osei · Mohamed Ahmed · Constantine Lignos · shamsuddeen muhammad

Africa has over 2000 languages, yet they are among the least represented in NLP research. The rise in ML community efforts on the African continent has led to a vibrant NLP community. This interest is manifesting in national, regional, continental, and even global collaborative efforts focused on African languages, African corpora, and tasks of importance to the African context. Since 2020, the AfricaNLP workshop has become a core event for the African NLP community. Many of the participants are active members of the Masakhane grassroots NLP community, allowing the community to convene, showcase, and share experiences with each other. Many first-time authors found collaborators through the mentorship programme and published their first papers. Those mentorship relationships built trust and coherence within the community that continue to this day. We aim to continue this. Large-scale collaborative works have been enabled by participants who joined through the AfricaNLP workshop, such as MasakhaNER (61 authors), Quality Assessment of Multilingual Datasets (51 authors), Corpora Building for Twi (25 authors), and NLP for Ghanaian Languages (25 authors). This workshop follows the previous successful editions in 2020 and 2021, co-located with ICLR and EACL respectively.

Miriam Redi · Yannis Kalantidis · Krishna Srinivasan · Yacine Jernite · Tiziano Piccardi · Diane Larlus · Stéphane Clinchant · Lucie-Aimée Kaffee

In the broader AI research community, Wikipedia data has been used for many years as part of the training datasets for (multilingual) language models like BERT. However, its content is still a largely untapped resource for vision and multimodal learning systems. Aside from a few recent cases, most vision-and-language efforts either work on narrow domains with small vocabularies and/or are available for English only, thus limiting the diversity of perspectives and audiences incorporated by these technologies. Recently, we have seen methods leveraging large data for multimodal pretraining, and Wikipedia is one of the few open resources central to that effort. With this workshop, we propose to offer a space that brings together the community of vision, language, and multilingual learning researchers, as well as members of the Wikimedia community, to discuss how these two groups can help and support each other. We will explore existing aspects and new frontiers of multilingual understanding of vision and language, focusing on the unique nature of Wikimedia's mission: to bring free knowledge to the whole world equally. Besides invited talks and panel discussions, our workshop will present the winning entries of an ongoing Wikimedia-led, large-scale challenge on multilingual, multimodal image-text retrieval. Using the publicly available Wikipedia-based …

Alexander Cloninger · Manohar Kaul · Ira Ktena · Nina Miolane · Bastian Rieck · Guy Wolf

Over the past two decades, high-throughput data collection technologies have become commonplace in most fields of science and technology, and with them, ever-increasing amounts of high-dimensional data are being generated by virtually every real-world system. While such data systems are highly diverse in nature, the underlying analysis and exploration tasks give rise to common challenges at the core of modern representation learning. For example, even though modern real-world data typically live in high-dimensional ambient measurement spaces, they often exhibit low-dimensional intrinsic structures that can be uncovered by geometry-oriented methods, such as those encountered in manifold learning, graph signal processing, geometric deep learning, and topological data analysis. As a result, recent years have seen significant interest and progress in geometric and topological approaches to representation learning, enabling tractable exploratory analysis by domain experts who frequently do not have a strong computational background. Motivation: Despite increased interest in the aforementioned methods, there is no forum in which to present work in progress and get feedback from the machine learning community. Knowing the diverse backgrounds of researchers visiting ICLR, we consider this venue the perfect opportunity to bring together domain experts, practitioners, and researchers that are …

Andrea Tacchetti · Ian Gemp · Elise van der Pol · Arash Mehrjou · Satpreet H Singh · Noah Golowich · Sarah Perrin · Nina Vesseron

Can we reformulate machine learning from the ground up with multiagent systems in mind? Modern machine learning primarily takes an optimization-first, single-agent approach. However, many of life's intelligent systems are multiagent in nature, across a range of scales and domains: market economies, ant colonies, forest ecosystems, and decentralized energy grids.

Generative adversarial networks represent one of the most successful recent deviations from the dominant single-agent paradigm, formulating generative modeling as a two-player, zero-sum game. Similarly, a few recent methods that formulate core problems of machine learning and data science as games among interacting agents have gained recognition (e.g., PCA, NMF). Multiagent designs are typically distributed and decentralized, which leads to robust and parallelizable learning algorithms.

We want to bring together a community of people who want to revisit machine learning problems and reformulate them as solutions to games. How might this algorithmic bias affect the solutions that arise, and could we define a blueprint for problems that are amenable to gamification? By exploring this direction, we may gain a fresh perspective on machine learning, with distinct advantages over the currently dominant optimization paradigm.

Jan Feyereisl · Olga Afanasjeva · Jitka Cejkova · Martin Poliak · Mark Sandler · Max Vladymyrov

In natural systems, learning and adaptation occur at multiple levels and often involve interaction between multiple independent agents. Examples include cell-level self-organization, brain plasticity, and complex societies of biological organisms that operate without a system-wide objective. All these systems exhibit remarkably similar patterns of learning through local interaction. In contrast, most existing approaches to AI, though inspired by biological systems at the mechanistic level, usually ignore this aspect of collective learning and instead optimize a global, hand-designed, and usually fixed loss function in isolation. We posit there is much to be learned and adopted from natural systems in terms of how learning happens through collective interactions across scales (starting from single cells, through complex organisms, up to groups and societies). The goal of this workshop is to explore both natural and artificial systems and see how they can (or already do) lead to the development of new approaches to learning that go beyond the established optimization or game-theoretic views. The specific topics that we plan to solicit include, but are not limited to: learning leveraged through collectives, biological and otherwise (emergence of learning, swarm intelligence, applying high-level brain features such as fast/slow thinking to AI …

Mikhail Noukhovitch · Roberto Dessi · Agnieszka Słowik · Kevin Denamganai · Niko Grupen · Mathieu Rita · Florian Strub

Emergent Communication (EC) studies learning to communicate by interacting with other agents to solve collaborative tasks. There is a long history of EC in linguistics and the study of language evolution, but following deep learning breakthroughs, there has been an explosion in deep EC research. Early work focused on learning more complex and effective protocols for MARL, but recent research has expanded in scope: inductive biases, population structures, measurements, and evolutionary biology. In parallel, new research has used EC and its paradigm for practical applications in NLP, video games, and even networking. EC has significant potential to impact a wide range of disciplines, both within AI (e.g., MARL, visual question answering, explainability, robotics) and beyond (e.g., sociolinguistics, cognitive science, philosophy of language), so the goal of this workshop is to push the boundaries of EC as a field and methodology. To achieve this, we are proposing a novel, discussion-focused workshop format and assembling speakers from ML to CogSci to Philosophy and the Arts. Our goal is to create a space for an interdisciplinary community, open new frontiers, and foster future research collaboration.

Torsten Scholak · Gabriel Orlanski · Disha Shrivastava · Arun Raja · Dzmitry Bahdanau · Jonathan Herzig

An exciting application area of machine learning and deep learning methods is the completion, repair, synthesis, and automatic explanation of program code. This field has received a fair amount of attention in the last decade, yet the recent application of large-scale language modelling techniques to the domain of code holds tremendous promise to completely revolutionize the area. The new large pretrained models excel at completing code and synthesizing code from natural language descriptions; they work across a wide range of domains, tasks, and programming languages. The excitement about new possibilities is spurring tremendous interest in both industry and academia. Yet, we are just beginning to explore the potential of large-scale deep learning for code, and state-of-the-art models still struggle with correctness and generalization. This calls for platforms to exchange ideas and discuss the challenges in this line of work. Deep Learning for Code (DL4C) is a workshop that will provide a platform for researchers to share their work on deep learning for code. DL4C welcomes researchers interested in a number of topics, including but not limited to: AI code assistants, representations and model architectures for code, pretraining methods, methods for producing code from natural language, static code analysis and …

Rishabh Agarwal · Stephanie Chan · Xavier Bouthillier · Caglar Gulcehre · Jesse Dodge

The aim of the workshop is to discuss and propose standards for evaluating ML research, in order to better identify promising new directions and to accelerate real progress in the field. This requires understanding which practices add to, or detract from, the generalizability and reliability of reported results, and what incentives lead researchers to follow best practices. We may draw inspiration from adjacent scientific fields, from statistics, or from the history of science. Acknowledging that there is no consensus on best practices for ML, the workshop will focus on panel discussions and a few invited talks representing a variety of perspectives. The call for papers will welcome opinion papers as well as more technical papers on the evaluation of ML methods. We plan to summarize the findings and topics that emerged during our workshop in a short report.

Perouz Taslakian · Pierre-André Noël · David Vazquez · Jian Tang · Xavier Bresson

Recent advances in Machine Learning (ML) have revolutionized our ability to solve complex problems in a myriad of application domains. Yet, just as empirical data plays a fundamental role in the development of such applications, the process of designing these methods has also remained empirical: we have learned which of the known methods tend to perform better for certain types of problems, and have developed intuition guiding our discovery of new methods.

In contrast, classical algorithmic theory provides tools directly addressing the mathematical core of a problem, and clear theoretical justifications motivate powerful design techniques. At the heart of this process is the analysis of the correctness and time/space efficiency of an algorithm, providing actionable bounds and guarantees. Problems themselves may be characterized by bounding the performance of any algorithm, providing a meaningful reference point to which concrete algorithms may be compared. While ML models may appear to be an awkward fit for such techniques, some research in the area has succeeded in obtaining results with the “definitive” flavour associated with algorithms, complementary to empirical ones. Are such discoveries bound to be exceptions, or can they be part of a new algorithmic theory?

The GroundedML workshop seeks to bring together …

Chaowei Xiao · Huan Zhang · Xueru Zhang · Hongyang Zhang · Cihang Xie · Beidi Chen · Xinchen Yan · Yuke Zhu · Bo Li · Zico Kolter · Dawn Song · Anima Anandkumar

Machine learning (ML) systems are increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., financial analytics and autonomous driving). Recently, the concept of foundation models has received significant attention in the ML community; it refers to the rise of models (e.g., BERT, GPT-3) that are trained on large-scale data and work surprisingly well on a wide range of downstream tasks. While foundation models present many opportunities, ranging from capabilities (e.g., language, vision, robotics, reasoning, human interaction) to applications (e.g., law, healthcare, education, transportation) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations), concerns have been raised that these models can inflict harm if they are not developed or used with care. It has been well documented that ML models can:
- inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups;
- be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from training data;
- make hard-to-justify predictions with a lack of transparency and interpretability.
This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, …

Esube Bekele · Celia Cintas · Timnit Gebru · Judy Gichoya · Meareg Hailemariam · Waheeda Saib · Girmaw Abebe Tadesse

The constant progress being made in artificial intelligence needs to extend across borders if we are to democratize AI in developing countries. Adapting state-of-the-art (SOTA) methods to resource-constrained environments such as developing countries is challenging in practice. Recent breakthroughs in natural language processing (NLP), for instance, rely on increasingly complex and large models (e.g., most models based on transformers, such as BERT, ViLBERT, ALBERT, and GPT-2) that are pre-trained on large corpora of unlabeled data. In most developing countries, low or limited resources make the adoption of these breakthroughs a hard path. Methods such as transfer learning will not fully solve the problem either, due to bias in pre-training datasets that do not reflect real test cases in developing countries, as well as the prohibitive cost of fine-tuning these large models. Recent progress focused on ML for social good has the potential to alleviate the problem in part. However, the themes in such workshops are usually application-driven, such as ML for healthcare and education, and less attention is given to the practical aspects of implementing these solutions in low- or limited-resource scenarios in developing countries. This, in turn, hinders the democratization of AI …

Lingfei Wu · Bang Liu · Rada Mihalcea · Jian Pei · Yue Zhang · Yunyao Li

There is a rich variety of NLP problems that can be best expressed with graph structures. Owing to their power in modeling non-Euclidean data such as graphs and manifolds, deep learning on graphs techniques (i.e., Graph Neural Networks (GNNs)) have opened a new door to solving challenging graph-related NLP problems and have already achieved great success. As a result, there is a new wave of research at the intersection of deep learning on graphs and NLP which has influenced a variety of NLP tasks, ranging from classification tasks like sentence classification, semantic role labeling, and relation extraction, to generation tasks like machine translation, question generation, and summarization. Despite these successes, deep learning on graphs for NLP still faces many challenges, including but not limited to 1) automatically transforming original text into highly graph-structured data, 2) graph representation learning for complex graphs (e.g., multi-relational graphs, heterogeneous graphs), and 3) learning the mapping between complex data structures (e.g., Graph2Seq, Graph2Tree, Graph2Graph).

This workshop aims to bring together both academic researchers and industrial practitioners from different backgrounds and perspectives to address the above challenges. This workshop intends to share visions of investigating new approaches and methods at the intersection of graph machine learning and NLP. …

Pascal Notin · Stefan Bauer · Andrew Jesson · Yarin Gal · Patrick Schwab · Debora Marks · Sonali Parbhoo · Ece Ozkan · Clare Lyle · Ashkan Soleymani · Júlia Domingo · Arash Mehrjou · Melanie Fernandez Pradier · Anna Bauer-Mehren · Max Shen

We are at a pivotal moment in healthcare, characterized by unprecedented scientific and technological progress in recent years together with the promise of personalized medicine to radically transform the way we provide care to patients. However, drug discovery has become an increasingly challenging endeavour: not only has the success rate of developing new therapeutics been historically low, but this rate has been steadily declining. The average cost to bring a new drug to market is now estimated at $2.6 billion, 140% higher than a decade earlier. Machine learning-based approaches present a unique opportunity to address this challenge. While there has been growing interest and pioneering work in the machine learning (ML) community over the past decade, the specific challenges posed by drug discovery are largely unknown to the broader community. We would like to organize a workshop on 'Machine Learning for Drug Discovery' (MLDD) at ICLR 2022 with the ambition to federate the community interested in this application domain, where i) ML can have a significant positive impact for the benefit of all, and ii) the application domain can drive ML method development through novel problem settings, benchmarks, and testing grounds at the intersection of many subfields, ranging from representation, …

Yuanqi Du · Adji Dieng · Yoon Kim · Rianne van den Berg · Yoshua Bengio

Deep generative models are at the core of research in artificial intelligence, especially for unlabelled data. They have achieved remarkable performance in domains including computer vision, natural language processing, speech recognition, and audio synthesis. Very recently, deep generative models have been applied to broader domains, e.g., fields of science including physics, chemistry, molecular biology, and medicine. However, deep generative models still face challenges when applied to these domains, which give rise to highly structured data. This workshop aims to bring together experts from different backgrounds and perspectives to discuss the applications of deep generative models to these data modalities. The workshop will put an emphasis on challenges in encoding domain knowledge when learning representations, performing synthesis, or making predictions. Since evaluation is essential for benchmarking, the workshop will also be a platform for discussing rigorous ways to evaluate representations and synthesis.

Young Min Kim · Sergey Levine · Ming Lin · Tongzhou Mu · Ashvin Nair · Hao Su

While the study of generalization has played an essential role in many application domains of machine learning (e.g., image recognition and natural language processing), it did not receive the same amount of attention in common frameworks of policy learning (e.g., reinforcement learning and imitation learning) at the early stage, for reasons such as the difficulty of policy optimization and the immaturity of benchmark datasets. Generalization is particularly important when learning policies to interact with the physical world. The spectrum of such policies is broad: they can be high-level, such as action plans that concern temporal dependencies and causalities of environment states, or low-level, such as manipulation skills to transform objects that are rigid, articulated, soft, or even fluid. In the physical world, an embodied agent can face a number of changing factors, such as physical parameters, action spaces, tasks, visual appearances of the scenes, and the geometry and topology of the objects. Many important real-world tasks involve generalizable policy learning, e.g., visual navigation, object manipulation, and autonomous driving. Therefore, learning generalizable policies is crucial to developing intelligent embodied agents in the real world. Though important, the field is very much under-explored in a systematic way. Learning generalizable policies in …

Natasha Dudek · Karianne Bergen · Stewart Jamieson · Valentin Tertius Bickel · Will Chapman · Johanna Hansen

When will the San Andreas faultline next experience a massive earthquake? What can be done to reduce human exposure to zoonotic pathogens such as coronaviruses and schistosomiasis? How can robots be used to explore other planets in the search for extraterrestrial life? AI is poised to play a critical role in answering Earth and Space Sciences questions such as these, boosted by continually expanding, massive volumes of data from geoscientific sensors, remote sensing data from satellites and space probes, and simulated data from high-performance climate and weather simulations. The complexity of these datasets, however, poses an inherent challenge to AI, as they are often noisy, may contain temporal and/or geographic dependencies, and require substantial interdisciplinary expertise to collect and interpret. This workshop aims to highlight work being done at the intersection of AI and the Earth and Space Sciences, with a special focus on model interpretability at the ICLR 2022 iteration of the workshop (formerly held at ICLR 2020 and NeurIPS 2020). Notably, we do not focus on climate change, as this specialized topic is addressed elsewhere and our scope is substantially broader. We showcase cutting-edge applications of machine learning to Earth and Space Science problems, including study of the …

Rosanne Liu · Krystal Maughan · Thomas F Burns · Ching Lam Choi · Arun Raja

A whole-day event celebrating and summarizing our progress on the "Broadening our Call for Participation to ICLR 2022" initiative. The goal of this workshop is to reflect on, document, and celebrate projects initiated through the CSS initiative and to plan our road forward.

Hao Wang · Wanyu LIN · Hao He · Di Wang · Chengzhi Mao · Muhan Zhang

In recent years, we have seen principles and guidance relating to the accountable and ethical use of artificial intelligence (AI) spring up around the globe. Specifically, Data Privacy, Accountability, Interpretability, Robustness, and Reasoning have been broadly recognized as fundamental principles for using machine learning (ML) technologies in decision-critical and/or privacy-sensitive applications. On the other hand, in numerous real-world applications, data itself can be well represented in various structured formalisms, such as graph-structured data (e.g., networks), grid-structured data (e.g., images), sequential data (e.g., text), etc. By exploiting this inherently structured knowledge, one can design plausible approaches that identify and use more relevant variables to make reliable decisions, thereby facilitating real-world deployments. In this workshop, we will examine the research progress towards the accountable and ethical use of AI from diverse research communities, such as the ML community, the security & privacy community, and more. Specifically, we will focus on the limitations of existing notions of Privacy, Accountability, Interpretability, Robustness, and Reasoning. We aim to bring together researchers from various areas (e.g., ML, security & privacy, computer vision, and healthcare) to facilitate discussions of related challenges, definitions, formalisms, and evaluation protocols regarding the accountable and ethical use of ML technologies in high-stakes applications with structured …