

Timezone: UTC
Invited Talk
12:00 AM - 1:15 AM

A connectome represents brain connectivity as a directed graph in which nodes are neurons and edges are synapses. The connectome of C. elegans was reconstructed from electron microscopic images in the 1970s and 80s, but the manual labor of image analysis was prohibitive. Convolutional nets were applied to automate image analysis starting in the 2000s, and are now the basis of computational systems engineered to handle petascale datasets.
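
As a minimal illustration of the directed-graph abstraction described above (a toy sketch, not the speaker's actual data or tooling; the synapse counts below are invented), a small connectome fragment could be represented with networkx:

```python
# Toy connectome fragment: nodes are neurons, directed edges are synaptic
# connections, and edge weights count synapses. Values are illustrative only.
import networkx as nx

connectome = nx.DiGraph()
connectome.add_edge("AVAL", "VA08", weight=12)  # presynaptic -> postsynaptic
connectome.add_edge("AVAL", "DA02", weight=7)
connectome.add_edge("PVCL", "VB05", weight=4)

# Number of distinct postsynaptic partners of AVAL, and one synapse count.
print(connectome.out_degree("AVAL"))           # 2
print(connectome["AVAL"]["VA08"]["weight"])    # 12
```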

The connectome of the fruit fly Drosophila is expected in 2023. Cubic millimeter volumes of cerebral cortex have also been reconstructed. The explosion of connectomic information is revealing innate structures of nervous systems, and is expected to constrain theories of how brains learn. An exascale project to reconstruct an entire mouse brain connectome is now being planned, and depends on improving the accuracy of automated image analysis by confronting a long tail of failure modes, including diverse kinds of image artifacts.

Speaker Bio
Sebastian Seung is Head of Samsung Research and Anthony B. Evnin Professor in the Neuroscience Institute and Computer Science Department at Princeton University. Over the past 15 years, he has helped pioneer the new field of connectomics, applying deep learning and crowdsourcing to reconstruct neural circuits from electron microscopic images. He is one of the creators of FlyWire, an online community that is currently proofreading the Drosophila connectome, and led image analysis for the recent petascale reconstruction of mouse visual cortex by the MICrONS Consortium. His book Connectome: How the Brain's Wiring Makes Us Who We Are was chosen by the Wall Street Journal as one of the Top Ten Nonfiction books of 2012. Before joining the Princeton faculty, Seung trained at Harvard and Hebrew Universities, worked at Bell Laboratories (1992-98), and taught at the Massachusetts Institute of Technology (1998-2013). He is an External Member of the Max Planck Society and winner of the 2008 Ho-Am Prize in Engineering.
Poster
1:30 AM - 3:30 AM
70 Events in this session
Shiori Sagawa · Pang Wei Koh · Tony Lee · Irena Gao · Sang Michael Xie · Kendrick Shen · Ananya Kumar · Weihua Hu · Michihiro Yasunaga · Henrik Marklund · Sara Beery · Etienne David · Ian Stavness · Wei Guo · Jure Leskovec · Kate Saenko · Tatsunori Hashimoto · Sergey Levine · Chelsea Finn · Percy Liang
Krzysztof Choromanski · Han Lin · Haoxian Chen · Arijit Sehanobish · Yuanzhe Ma · Deepali Jain · Jake Varley · Andy Zeng · Michael Ryoo · Valerii Likhosherstov · Dmitry Kalashnikov · Vikas Sindhwani · Adrian Weller
Yiping Lu · Haoxuan Chen · Jianfeng Lu · Lexing Ying · Jose Blanchet
Bryan Plummer · Nikoli Dryden · Julius Frost · Torsten Hoefler · Kate Saenko
Pranjal Awasthi · Abhimanyu Das · Rajat Sen · Ananda Suresh
Byungseok Roh · JaeWoong Shin · Wuhyun Shin · Saehoon Kim
Juntang Zhuang · Boqing Gong · Liangzhe Yuan · Yin Cui · Hartwig Adam · Nicha C Dvornek · sekhar tatikonda · James s Duncan · Ting Liu
Chencheng Xu · Zhiwei Hong · Minlie Huang · Tao Jiang
Yaohua Wang · Yaobin Zhang · Fangyi Zhang · Senzhang Wang · Ming Lin · Yuqi Zhang · Xiuyu Sun
Yonggang Zhang · Mingming Gong · Tongliang Liu · Gang Niu · Xinmei Tian · Bo Han · Bernhard Schoelkopf · Kun Zhang
Yucheng Lu · Si Yi Meng · Christopher De Sa
Bingbin Liu · Elan Rosenfeld · Pradeep K Ravikumar · Andrej Risteski
Boshi Wang · Jialin Yi · Hang Dong · Bo Qiao · Chuan Luo · Qingwei Lin
Kangjie Chen · Yuxian Meng · Xiaofei Sun · Shangwei Guo · Tianwei Zhang · Jiwei Li · Chun Fan
Tan Yu · Jun Li · YUNFENG CAI · Ping Li
Kensen Shi · Hanjun Dai · Kevin Ellis · Charles Sutton
Chen-Hao Chao · Wei-Fang Sun · Bo-Wun Cheng · Yi-Chen Lo · Chia-Che Chang · Yu-Lun Liu · Yu-Lin Chang · Chia-Ping Chen · Chun-Yi Lee
Allan Zhou · Fahim Tajwar · Alexander Robey · Tom Knowles · George Pappas · Hamed Hassani · Chelsea Finn
Huiyun Yang · Huadong Chen · Hao Zhou · Lei Li
Lingjie Mei · Jiayuan Mao · Ziqi Wang · Chuang Gan · Joshua B Tenenbaum
Yiming Li · Haoxiang Zhong · Xingjun Ma · Yong Jiang · Shu-Tao Xia
Chunwei Ma · Ziyun Huang · Mingchen Gao · Jinhui Xu
Zhimeng Jiang · Xiaotian Han · Chao Fan · Fan Yang · Ali Mostafavi · Xia Hu
Yuge Shi · Jeffrey Seely · Philip Torr · Siddharth N · Awni Hannun · Nicolas Usunier · Gabriel Synnaeve
Armen Aghajanyan · Dmytro Okhonko · Mike Lewis · Mandar Joshi · Hu Xu · Gargi Ghosh · Luke Zettlemoyer
Yingjie Wang · Xianrui Zhong · Fengxiang He · Hong Chen · Dacheng Tao
Chieh Hubert Lin · Hsin-Ying Lee · Yen-Chi Cheng · Sergey Tulyakov · Ming-Hsuan Yang
Wei Deng · Siqi Liang · Botao Hao · Guang Lin · Faming Liang
A. Tuan Nguyen · Toan Tran · Yarin Gal · Philip Torr · Atilim Gunes Baydin
Seohong Park · Jongwook Choi · Jaekyeom Kim · Honglak Lee · Gunhee Kim
Edward Hu · yelong shen · Phillip Wallis · Zeyuan Allen-Zhu · Yuanzhi Li · Shean Wang · Lu Wang · Weizhu Chen
Youngmin Oh · Jinwoo Shin · Eunho Yang · Sung Ju Hwang
Sen Lin · Jialin Wan · Tengyu Xu · Yingbin Liang · Junshan Zhang
Rahul Ramesh · Pratik A Chaudhari
Josh Gardner · Ian Simon · Ethan Manilow · Curtis Hawthorne · Jesse Engel
Chengyue Gong · Dilin Wang · Meng Li · Xinlei Chen · Zhicheng Yan · Yuandong Tian · Qiang Liu · Vikas Chandra
Evan Hernandez · Sarah Schwettmann · David Bau · Teona Bagashvili · Antonio Torralba · Jacob Andreas
Shixiang Zhu · Haoyun Wang · Zheng Dong · Xiuyuan Cheng · Yao Xie
Naman Agarwal · Syomantak Chaudhuri · Prateek Jain · Dheeraj Nagaraj · Praneeth Netrapalli
Ningyu Zhang · Zhen Bi · Xiaozhuan Liang · Siyuan Cheng · Haosen Hong · Shumin Deng · Qiang Zhang · Jiazhang Lian · Huajun Chen
Aahlad Puli · Lily Zhang · Eric Oermann · Rajesh Ranganath
Xinshi Chen · Haoran Sun · Le Song
Shengyao Lu · Bang Liu · Keith G Mills · SHANGLING JUI · Di Niu
Zhi Zhang · Zhuoran Yang · Han Liu · Pratap Tokekar · Furong Huang
Tao Huang · Zekang Li · Hua Lu · Yong Shan · Shusheng Yang · Yang Feng · Fei Wang · Shan You · Chang Xu
Jianing ZHU · Jiangchao Yao · Bo Han · Jingfeng Zhang · Tongliang Liu · Gang Niu · Jingren Zhou · Jianliang Xu · Hongxia Yang
Yu Yao · Tongliang Liu · Bo Han · Mingming Gong · Gang Niu · Masashi Sugiyama · Dacheng Tao
Yifan Gong · Yuguang Yao · Yize Li · Yimeng Zhang · Xiaoming Liu · Xue Lin · Sijia Liu
Taesung Kim · Jinhee Kim · Yunwon Tae · Cheonbok Park · Jang-Ho Choi · Jaegul Choo
Navid Kardan · Mubarak Shah · Mitchell Hill
Albert Cheu · Matthew Joseph · Jieming Mao · Binghui Peng
Ayan Das · Yongxin Yang · Timothy Hospedales · Tao Xiang · Yi-Zhe Song
Zhang-Wei Hong · Tao Chen · Yen-Chen Lin · Joni Pajarinen · Pulkit Agrawal
Xiaoyu Chen · Jiachen Hu · Chi Jin · Lihong Li · Liwei Wang
Changho Shin · Winfred Li · Harit Vishwakarma · Nicholas Roberts · Frederic Sala
Kohei Miyaguchi · Takayuki Katsuki · Akira Koseki · Toshiya Iwamori
Jiahui Yu · Xin Li · Jing Yu Koh · Han Zhang · Ruoming Pang · James Qin · Alexander Ku · Yuanzhong Xu · Jason Baldridge · Yonghui Wu
Remarks

Closing Remarks

Yan Liu · Katja Hofmann · Feryal Behbahani · Vukosi Marivate
7:00 AM - 7:30 AM

Thank you for a wonderful virtual ICLR 2022. We provide a quick overview of ICLR 2022 activities, describe the workshop selection process, introduce the workshops on Friday, and finally reveal our plans for ICLR 2023.

Workshop

Workshop on the Elements of Reasoning: Objects, Structure and Causality

Sungjin Ahn · Wilka Carvalho · Klaus Greff · Tong He · Thomas Kipf · Francesco Locatello · Sindy Löwe
7:00 AM - 6:50 PM

Discrete abstractions such as objects, concepts, and events are at the basis of our ability to perceive the world, relate the pieces in it, and reason about their causal structure. The research communities of object-centric representation learning and causal machine learning have – largely independently – pursued a similar agenda of equipping machine learning models with more structured representations and reasoning capabilities. Despite their different languages, these communities have similar premises and overall pursue the same benefits: they operate under the assumption that, compared to a monolithic/black-box representation, a structured model will improve systematic generalization, robustness to distribution shifts, downstream learning efficiency, and interpretability. The two communities, however, typically approach the problem from opposite directions. Work on causality often assumes a known (true) decomposition into causal factors and focuses on inferring and leveraging interactions between them. Object-centric representation learning, on the other hand, typically starts from an unstructured input and aims to infer a useful decomposition into meaningful factors; it has so far been less concerned with their interactions.

This workshop aims to bring together researchers from object-centric and causal representation learning. To help integrate ideas from these areas, we invite perspectives from other fields, including cognitive psychology and neuroscience. We hope that this creates opportunities for discussion, presenting cutting-edge research, establishing new collaborations, and identifying future research directions.

Workshop

Workshop on Agent Learning in Open-Endedness

Minqi Jiang · Jack Parker-Holder · Michael D Dennis · Mikayel Samvelyan · Roberta Raileanu · Jakob Foerster · Edward Grefenstette · Tim Rocktaeschel
8:45 AM - 8:15 PM

Open-ended learning processes that co-evolve agents and their environments gave rise to human intelligence, but producing such a system, one that generates endless, meaningful novelty, remains an open problem in AI research. We hope our workshop provides a forum both for bridging knowledge across a diverse set of relevant fields and for sparking new insights that can enable agent learning in open-endedness.

Workshop

AfricaNLP 2022: NLP for African languages

David Adelani · Angela Fan · Jade Abbott · Perez Ogayo · Hady Elsahar · Salomey Osei · Mohamed Ahmed · Constantine Lignos · shamsuddeen muhammad
9:00 AM - 6:30 PM

Africa has over 2000 languages and yet is one of the regions least represented in NLP research. The rise of ML community efforts on the African continent has led to a vibrant NLP community. This interest is manifesting in the form of national, regional, continental, and even global collaborative efforts focused on African languages, African corpora, and tasks with importance to the African context. Since 2020, the AfricaNLP workshop has become a core event for the African NLP community. Many of the participants are active members of the Masakhane grassroots NLP community, and the workshop allows that community to convene, showcase its work, and share experiences. Many first-time authors found collaborators through the mentorship programme and published their first paper, and those mentorship relationships built trust and coherence within the community that continues to this day. We aim to continue this.

Large-scale collaborative works have been enabled by participants who joined through the AfricaNLP workshop, such as MasakhaNER (61 authors), Quality Assessment of Multilingual Datasets (51 authors), Corpora Building for Twi (25 authors), and NLP for Ghanaian Languages (25 authors). This workshop follows the successful previous editions in 2020 and 2021, co-located with ICLR and EACL respectively.

Workshop

Wiki-M3L: Wikipedia and Multimodal & Multilingual Research

Miriam Redi · Yannis Kalantidis · Krishna Srinivasan · Yacine Jernite · Tiziano Piccardi · Diane Larlus · Stéphane Clinchant · Lucie-Aimée Kaffee
10:00 AM - 8:20 PM

In the broader AI research community, Wikipedia data has been used for many years as part of the training datasets for (multilingual) language models like BERT. However, its content is still a largely untapped resource for vision and multimodal learning systems. Aside from a few recent cases, most vision and language efforts either work on narrow domains and small vocabularies and/or are available for English only, thus limiting the diversity of perspectives and audiences incorporated by these technologies. Recently, we have seen methods leveraging large data for multimodal pretraining, and Wikipedia is one of the few open resources central to that effort.

With this workshop, we propose to offer a space to bring together the community of vision, language, and multilingual learning researchers, as well as members of the Wikimedia community, to discuss how these two groups can help and support each other. We will explore existing aspects and new frontiers of multilingual understanding of vision and language, focusing on the unique nature of Wikimedia's mission: to bring free knowledge to the whole world equally.

Besides invited talks and panel discussions, our workshop will present the winning entries of an ongoing Wikimedia-led, large-scale challenge on multilingual, multimodal image-text retrieval. Using the publicly available Wikipedia-based Image Text (WIT) dataset, which contains 37 million image-text sets across 108 languages, we will present the benchmark and the top methods along with a disaggregated set of performance, fairness, and efficiency metrics.
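
As a hedged, minimal sketch of the image-text retrieval task behind the challenge (not the official benchmark pipeline; the image file and captions below are placeholders), one could score candidate captions against an image with a pretrained dual encoder such as CLIP:

```python
# Minimal image-text retrieval sketch with a pretrained CLIP dual encoder
# (English-only here; the WIT challenge itself spans 108 languages and uses
# its own data splits and disaggregated performance/fairness/efficiency metrics).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image file
captions = ["a cathedral in Seville", "a plate of food", "a mountain landscape"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
scores = model(**inputs).logits_per_image[0]   # similarity of the image to each caption
print(captions[int(scores.argmax())])
```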

Workshop

Deep Learning for Code

Torsten Scholak · Gabriel Orlanski · Disha Shrivastava · Arun Raja · Dzmitry Bahdanau · Jonathan Herzig
12:00 PM - 9:15 PM

An exciting application area of machine learning and deep learning methods is the completion, repair, synthesis, and automatic explanation of program code. This field has received a fair amount of attention in the last decade, yet the recent application of large-scale language modelling techniques to the domain of code arguably holds tremendous promise to revolutionize the area. The new large pretrained models excel at completing code and synthesizing code from natural language descriptions; they work across a wide range of domains, tasks, and programming languages. The excitement about new possibilities is spurring tremendous interest in both industry and academia. Yet we are just beginning to explore the potential of large-scale deep learning for code, and state-of-the-art models still struggle with correctness and generalization. This calls for platforms to exchange ideas and discuss the challenges in this line of work. Deep Learning for Code (DL4C) is a workshop that provides such a platform for researchers to share their work on deep learning for code.

DL4C welcomes researchers interested in a number of topics, including but not limited to: AI code assistants, representations and model architectures for code, pretraining methods, methods for producing code from natural language, static code analysis, and evaluation of deep learning for code techniques.
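
As a minimal sketch of the completion-from-context setting described above (an illustrative example, not a workshop-endorsed system; it assumes the publicly released Salesforce/codegen-350M-mono checkpoint is available from the Hugging Face Hub):

```python
# Minimal code-completion sketch with a pretrained causal language model for code.
# Assumes the Salesforce/codegen-350M-mono checkpoint can be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```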

Workshop

Emergent Communication: New Frontiers

Michael Noukhovitch · Roberto Dessi · Agnieszka Słowik · Kevin Denamganai · Niko Grupen · Mathieu Rita · Florian Strub
12:00 PM - 9:00 PM

Emergent Communication (EC) studies how agents learn to communicate by interacting with other agents to solve collaborative tasks. There is a long history of EC in linguistics and the study of language evolution, but following deep learning breakthroughs there has been an explosion in deep EC research. Early work focused on learning more complex and effective protocols for MARL, but recent research has expanded in scope: inductive biases, population structures, measurements, and evolutionary biology. In parallel, new research has used EC and its paradigm for practical applications in NLP, video games, and even networking. EC has significant potential to impact a wide range of disciplines both within AI (e.g. MARL, visual question answering, explainability, robotics) and beyond (e.g. social linguistics, cognitive science, philosophy of language), so the goal of this workshop is to push the boundaries of EC as a field and methodology. To achieve this, we are proposing a novel, discussion-focused workshop format and assembling speakers from ML to CogSci to Philosophy and the Arts. Our goal is to create a space for an interdisciplinary community, open new frontiers, and foster future research collaboration.

Workshop

Setting up ML Evaluation Standards to Accelerate Progress

Rishabh Agarwal · Stephanie Chan · Xavier Bouthillier · Caglar Gulcehre · Jesse Dodge
12:00 PM - 10:35 PM

The aim of the workshop is to discuss and propose standards for evaluating ML research, in order to better identify promising new directions and to accelerate real progress in the field. The problem requires understanding the kinds of practices that add to or detract from the generalizability or reliability of reported results, as well as the incentives for researchers to follow best practices. We may draw inspiration from adjacent scientific fields, from statistics, or from the history of science. Acknowledging that there is no consensus on best practices for ML, the workshop will focus on panel discussions and a few invited talks representing a variety of perspectives. The call for papers welcomes opinion papers as well as more technical papers on the evaluation of ML methods. We plan to summarize the findings and topics that emerge during our workshop in a short report.

Call for Papers: https://ml-eval.github.io/call-for-papers/
Submission Site: https://cmt3.research.microsoft.com/SMILES2022

Workshop

From Cells to Societies: Collective Learning Across Scales

Jan Feyereisl · Olga Afanasjeva · Jitka Cejkova · Martin Poliak · Mark Sandler · Max Vladymyrov
12:00 PM - 10:05 PM

In natural systems, learning and adaptation occur at multiple levels and often involve interaction between multiple independent agents. Examples include cell-level self-organization, brain plasticity, and complex societies of biological organisms that operate without a system-wide objective. All these systems exhibit remarkably similar patterns of learning through local interaction. On the other hand, most existing approaches to AI, though inspired by biological systems at the mechanistic level, usually ignore this aspect of collective learning and instead optimize a global, hand-designed, and usually fixed loss function in isolation. We posit there is much to be learned and adopted from natural systems in terms of how learning happens through collective interactions across scales (starting from single cells, through complex organisms, up to groups and societies). The goal of this workshop is to explore both natural and artificial systems and see how they can (or already do) lead to the development of new approaches to learning that go beyond the established optimization or game-theoretic views. The specific topics that we plan to solicit include, but are not limited to: learning leveraged through collectives, biological and otherwise (emergence of learning, swarm intelligence, applying high-level brain features such as fast/slow thinking to AI systems, self-organization in AI systems, evolutionary approaches to AI systems, natural induction), and social and cultural learning in AI (cultural ratchet, cumulative cultural evolution, formulation of corresponding meta-losses and objectives, new methods for loss-free learning).

Workshop

Geometrical and Topological Representation Learning

Alexander Cloninger · Manohar Kaul · Ira Ktena · Nina Miolane · Bastian Rieck · Guy Wolf
12:00 PM - 9:00 PM

Over the past two decades, high-throughput data collection technologies have become commonplace in most fields of science and technology, and with them an ever-increasing amount of big, high-dimensional data is being generated by virtually every real-world system. While such data systems are highly diverse in nature, the underlying data analysis and exploration tasks give rise to common challenges at the core of modern representation learning. For example, even though modern real-world data typically exhibit high-dimensional ambient measurement spaces, they often exhibit low-dimensional intrinsic structures that can be uncovered by geometry-oriented methods, such as the ones encountered in manifold learning, graph signal processing, geometric deep learning, and topological data analysis. As a result, recent years have seen significant interest and progress in geometric and topological approaches to representation learning, thus enabling tractable exploratory analysis by domain experts who frequently do not have a strong computational background.

Motivation. Despite increased interest in the aforementioned methods, there is no forum in which to present work in progress and get feedback from the machine learning community. Knowing the diverse backgrounds of researchers visiting ICLR, we consider this venue to be the perfect opportunity to bring together domain experts, practitioners, and researchers that are developing the next generation of computational methods. In our opinion, such discussions need to be held in an inclusive setting, getting feedback from different perspectives to improve the work and advance the state of the art. Our workshop provides a unique forum for disseminating (preliminary) research in fields that are not yet fully covered by the main conference. Our overarching goal is to deepen our understanding of challenges and opportunities, while breaking down barriers between disjoint communities and emphasizing collaborative efforts across domains.

Workshop

Gamification and Multiagent Solutions

Andrea Tacchetti · Ian Gemp · Elise van der Pol · Arash Mehrjou · Satpreet H Singh · Noah Golowich · Sarah Perrin · Nina Vesseron
12:00 PM - 10:00 PM

Can we reformulate machine learning from the ground up with multiagent systems in mind? Modern machine learning primarily takes an optimization-first, single-agent approach; however, many of life's intelligent systems are multiagent in nature across a range of scales and domains, such as market economies, ant colonies, forest ecosystems, and decentralized energy grids.

Generative adversarial networks represent one of the most recent successful deviations from the dominant single-agent paradigm by formulating generative modeling as a two-player, zero-sum game. Similarly, a few recent methods formulating root node problems of machine learning and data science as games among interacting agents have gained recognition (PCA, NMF). Multiagent designs are typically distributed and decentralized, which leads to robust and parallelizable learning algorithms.
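
As a concrete instance of this two-player, zero-sum framing, the standard GAN value function (from Goodfellow et al.'s original formulation) has the generator G and discriminator D optimizing a single objective in opposite directions:

$$\min_{G}\max_{D}\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big]$$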

We want to bring together a community of people who want to revisit machine learning problems and reformulate them as solutions to games. How might this algorithmic bias affect the solutions that arise, and could we define a blueprint for problems that are amenable to gamification? By exploring this direction, we may gain a fresh perspective on machine learning, with distinct advantages over the currently dominant optimization paradigm.

Workshop

Socially Responsible Machine Learning

Chaowei Xiao · Huan Zhang · Xueru Zhang · Hongyang Zhang · Cihang Xie · Beidi Chen · Xinchen Yan · Yuke Zhu · Bo Li · Zico Kolter · Dawn Song · Anima Anandkumar
12:45 PM - 10:00 PM

Machine learning (ML) systems have been increasingly used in many applications, ranging from decision-making systems (e.g., automated resume screening and pretrial release tools) to safety-critical tasks (e.g., financial analytics and autonomous driving). Recently, the concept of foundation models has received significant attention in the ML community, referring to the rise of models (e.g., BERT, GPT-3) that are trained on large-scale data and work surprisingly well on a wide range of downstream tasks. While foundation models bring many opportunities, ranging from capabilities (e.g., language, vision, robotics, reasoning, human interaction) to applications (e.g., law, healthcare, education, transportation) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations), concerns have been raised that these models can inflict harm if they are not developed or used with care. It has been well documented that ML models can:
- Inherit pre-existing biases and exhibit discrimination against already-disadvantaged or marginalized social groups;
- Be vulnerable to security and privacy attacks that deceive the models and leak sensitive information from training data;
- Make hard-to-justify predictions with a lack of transparency and interpretability.

This workshop aims to build connections by bringing together both theoretical and applied researchers from various communities (e.g., machine learning, fairness & ethics, security, privacy, etc.). In particular, we are interested in the following topics:
- The intersection of various aspects of trustworthy ML: fairness, transparency, interpretability, privacy, and robustness;
- The possibility of using the most recent theory to inform practice guidelines for deploying trustworthy ML systems;
- Automatically detecting, verifying, explaining, and mitigating potential biases or privacy problems in existing models;
- Explaining the social impacts of machine learning bias.

Workshop

GroundedML: Anchoring Machine Learning in Classical Algorithmic Theory

Perouz Taslakian · Pierre-André Noël · David Vazquez · Jian Tang · Xavier Bresson
12:45 PM - 9:30 PM

Recent advances in Machine Learning (ML) have revolutionized our ability to solve complex problems in a myriad of application domains. Yet, just as empirical data plays a fundamental role in the development of such applications, the process of designing these methods has also remained empirical: we have learned which of the known methods tend to perform better for certain types of problems, and have developed intuition guiding our discovery of new methods.

In contrast, classical algorithmic theory provides tools directly addressing the mathematical core of a problem, and clear theoretical justifications motivate powerful design techniques. At the heart of this process is the analysis of the correctness and time/space efficiency of an algorithm, providing actionable bounds and guarantees. Problems themselves may be characterized by bounding the performance of any algorithm, providing a meaningful reference point to which concrete algorithms may be compared. While ML models may appear to be an awkward fit for such techniques, some research in the area has succeeded in obtaining results with the “definitive” flavour associated with algorithms, complementary to empirical ones. Are such discoveries bound to be exceptions, or can they be part of a new algorithmic theory?

The GroundedML workshop seeks to bring together researchers from both the algorithmic theory and machine learning communities, starting a dialogue on how ideas from theoretical algorithm design can inspire and guide future research in machine learning.

Workshop

Deep Learning on Graphs for Natural Language Processing

Lingfei Wu · Bang Liu · Rada Mihalcea · Jian Pei · Yue Zhang · Yunyao Li
1:00 PM - 10:00 PM

There is a rich variety of NLP problems that can best be expressed with graph structures. Due to their power in modeling non-Euclidean data such as graphs and manifolds, deep learning on graphs techniques (i.e., Graph Neural Networks (GNNs)) have opened a new door to solving challenging graph-related NLP problems and have already achieved great success. As a result, there is a new wave of research at the intersection of deep learning on graphs and NLP, which has influenced a variety of NLP tasks, ranging from classification tasks like sentence classification, semantic role labeling, and relation extraction, to generation tasks like machine translation, question generation, and summarization. Despite these successes, deep learning on graphs for NLP still faces many challenges, including but not limited to 1) automatically transforming original text into highly graph-structured data, 2) graph representation learning for complex graphs (e.g., multi-relational graphs, heterogeneous graphs), and 3) learning the mapping between complex data structures (e.g., Graph2Seq, Graph2Tree, Graph2Graph).
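
As a hedged illustration of the GNN machinery these challenges build on (a generic graph-convolution update, not a method from the workshop; the graph and feature sizes are toy values), one message-passing layer can be written as H' = ReLU(Â H W) with a normalized adjacency Â:

```python
# One GCN-style message-passing layer: H_next = ReLU(A_norm @ H @ W),
# where A_norm is the symmetrically normalized adjacency with self-loops.
# Toy graph and random features, for illustration only.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)       # 3-node undirected graph
A_hat = A + np.eye(3)                         # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt      # symmetric normalization

H = np.random.randn(3, 4)                     # node features: 3 nodes, 4 dims
W = np.random.randn(4, 2)                     # layer weights: 4 -> 2 dims

H_next = np.maximum(A_norm @ H @ W, 0.0)      # aggregate neighbors, transform, ReLU
print(H_next.shape)                           # (3, 2)
```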

This workshop aims to bring together both academic researchers and industrial practitioners from different backgrounds and perspectives to the above challenges. This workshop intends to share visions of investigating new approaches and methods at the intersection of graph machine learning and NLP. The workshop will consist of contributed talks, contributed posters, invited talks, and panelists on a wide variety of novel GNN methods and NLP applications.

Zoom link to the workshop: https://us06web.zoom.us/j/88116241775?pwd=bHdlSTFkMytGbWc0SkhVb01lcWkyZz09

Workshop

Machine Learning for Drug Discovery (MLDD)

Pascal Notin · Stefan Bauer · Andrew Jesson · Yarin Gal · Patrick Schwab · Debora Marks · Sonali Parbhoo · Ece Ozkan · Clare Lyle · Ashkan Soleymani · Júlia Domingo · Arash Mehrjou · Melanie Fernandez Pradier · Anna Bauer-Mehren · Max Shen
1:00 PM - 9:30 PM

We are at a pivotal moment in healthcare, characterized by unprecedented scientific and technological progress in recent years together with the promise of personalized medicine to radically transform the way we provide care to patients. However, drug discovery has become an increasingly challenging endeavour: not only has the success rate of developing new therapeutics been historically low, but this rate has been steadily declining. The average cost to bring a new drug to market is now estimated at $2.6 billion – 140% higher than a decade earlier. Machine learning-based approaches present a unique opportunity to address this challenge. While there has been growing interest and pioneering work in the machine learning (ML) community over the past decade, the specific challenges posed by drug discovery are largely unknown to the broader community. We are organizing the Machine Learning for Drug Discovery (MLDD) workshop at ICLR 2022 with the ambition to federate the community interested in this application domain, where i) ML can have a significant positive impact for the benefit of all and ii) the application domain can drive ML method development through novel problem settings, benchmarks, and testing grounds at the intersection of many subfields, ranging from representation, active, and reinforcement learning to causality and treatment effects.

Workshop

3rd Workshop on practical ML for Developing Countries: learning under limited/low resource scenarios

Esube Bekele · Celia Cintas · Timnit Gebru · Judy Gichoya · Meareg Hailemariam · Waheeda Saib · Girmaw Abebe Tadesse
1:00 PM - 7:45 PM

The constant progress being made in artificial intelligence needs to extend across borders if we are to democratize AI in developing countries. Adapting state-of-the-art (SOTA) methods to resource-constrained environments such as developing countries is challenging in practice. Recent breakthroughs in natural language processing (NLP), for instance, rely on increasingly complex and large models (e.g., most models based on transformers, such as BERT, VilBERT, ALBERT, and GPT-2) that are pre-trained on large corpora of unlabeled data. In most developing countries, low/limited resources mean a hard path towards the adoption of these breakthroughs. Methods such as transfer learning will not fully solve the problem either, due to bias in pre-training datasets that do not reflect real test cases in developing countries, as well as the prohibitive cost of fine-tuning these large models. Recent progress focused on ML for social good has the potential to alleviate the problem in part. However, the themes in such workshops are usually application-driven, such as ML for healthcare and ML for education, and less attention is given to the practical aspects of implementing these solutions in low or limited resource scenarios as they relate to developing countries. This, in turn, hinders the democratization of AI in developing countries. As a result, we aim to fill the gap by bringing together researchers, policymakers, and related stakeholders under the umbrella of practical ML for developing countries. The workshop is geared towards fostering collaborations and soliciting submissions under the broader theme of practical aspects of implementing machine learning (ML) solutions for problems in developing countries. We specifically encourage contributions that highlight the challenges of learning under limited or low resource environments that are typical in developing countries.

Workshop

Deep Generative Models for Highly Structured Data

Yuanqi Du · Adji Bousso Dieng · Yoon Kim · Rianne van den Berg · Yoshua Bengio
1:00 PM - 7:50 PM

Deep generative models are at the core of research in artificial intelligence, especially for unlabelled data. They have achieved remarkable performance in domains including computer vision, natural language processing, speech recognition, and audio synthesis. Very recently, deep generative models have been applied to broader domains, e.g. fields of science including the natural sciences, physics, chemistry and molecular biology, and medicine. However, deep generative models still face challenges when applied to these domains, which give rise to highly structured data. This workshop aims to bring together experts from different backgrounds and perspectives to discuss the applications of deep generative models to these data modalities. The workshop will put an emphasis on challenges in encoding domain knowledge when learning representations, performing synthesis, or for prediction purposes. Since evaluation is essential for benchmarking, the workshop will also be a platform for discussing rigorous ways to evaluate representations and synthesis.

Workshop

AI for Earth and Space Science

Natasha Dudek · Karianne Bergen · Stewart Jamieson · Valentin Tertius Bickel · Will Chapman · Johanna Hansen
3:00 PM - 11:35 PM

When will the San Andreas Fault next experience a massive earthquake? What can be done to reduce human exposure to zoonotic pathogens such as coronaviruses and schistosomiasis? How can robots be used to explore other planets in the search for extraterrestrial life? AI is poised to play a critical role in answering Earth and Space Sciences questions such as these, boosted by continually expanding, massive volumes of data from geo-scientific sensors, remote sensing data from satellites and space probes, and simulated data from high-performance climate and weather simulations. The complexity of these datasets, however, poses an inherent challenge to AI, as they are often noisy, may contain time and/or geographic dependencies, and require substantial interdisciplinary expertise to collect and interpret.

This workshop aims to highlight work being done at the intersection of AI and the Earth and Space Sciences, with a special focus on model interpretability at the ICLR 2022 iteration of the workshop (formerly held at ICLR 2020 and NeurIPS 2020). Notably, we do not focus on climate change, as this specialized topic is addressed elsewhere and our scope is substantially broader. We showcase cutting-edge applications of machine learning to Earth and Space Science problems, including study of the atmosphere, biosphere (ecology), hydrosphere (water), lithosphere (solid Earth), sensors and sampling, and planetary science. We cultivate areas where Earth and planetary science is informing and inspiring new developments in AI, including theoretical developments in interpretable AI models, hybrid models with knowledge-guided AI, augmenting physics-based models with AI, representation learning from graphs and manifolds in spatiotemporal models, and dimensionality reduction. For example, the application of physics-informed AI to fluid dynamics is leading to major advances in weather forecasting, in turn inspiring exciting new hybrid model-based/model-free methods.

Workshop

CoSubmitting Summer (CSS) Workshop

Rosanne Liu · Krystal Maughan · Thomas F Burns · Ching Lam Choi · Arun Raja
3:00 PM - 9:00 PM

A full-day event celebrating and summarizing our progress on the "Broadening our Call for Participation to ICLR 2022" initiative. The goal of this workshop is to reflect on, document, and celebrate projects initiated through the CSS initiative and to plan our road forward.

Workshop

Generalizable Policy Learning in the Physical World

Young Min Kim · Sergey Levine · Ming Lin · Tongzhou Mu · Ashvin Nair · Hao Su
3:00 PM - 1:30 AM

While the study of generalization has played an essential role in many application domains of machine learning (e.g., image recognition and natural language processing), it did not receive the same amount of attention in common frameworks of policy learning (e.g., reinforcement learning and imitation learning) at an early stage, for reasons such as the difficulty of policy optimization and the lack of suitable benchmark datasets. Generalization is particularly important when learning policies to interact with the physical world. The spectrum of such policies is broad: the policies can be high-level, such as action plans that concern temporal dependencies and causalities of environment states, or low-level, such as object manipulation skills to transform objects that are rigid, articulated, soft, or even fluid.

In the physical world, an embodied agent can face a number of changing factors such as physical parameters, action spaces, tasks, visual appearances of the scenes, and the geometry and topology of the objects. Many important real-world tasks involve generalizable policy learning, e.g., visual navigation, object manipulation, and autonomous driving. Therefore, learning generalizable policies is crucial to developing intelligent embodied agents in the real world. Though important, the field is very much under-explored in a systematic way.

Learning generalizable policies in the physical world requires deep synergistic efforts across the fields of vision, learning, and robotics, and poses many interesting research problems. This workshop is designed to foster progress in generalizable policy learning, in particular with a focus on tasks in the physical world, such as visual navigation, object manipulation, and autonomous driving. We envision that the workshop will bring together interdisciplinary researchers from machine learning, computer vision, and robotics to discuss current and future research on this topic.

Workshop

PAIR^2Struct: Privacy, Accountability, Interpretability, Robustness, Reasoning on Structured Data

Hao Wang · Wanyu LIN · Hao He · Di Wang · Chengzhi Mao · Muhan Zhang
4:00 PM - 1:00 AM

In recent years, we have seen principles and guidance relating to the accountable and ethical use of artificial intelligence (AI) spring up around the globe. Specifically, Data Privacy, Accountability, Interpretability, Robustness, and Reasoning have been broadly recognized as fundamental principles of using machine learning (ML) technologies in decision-critical and/or privacy-sensitive applications. On the other hand, in tremendous real-world applications, data itself can be well represented as various structured formalisms, such as graph-structured data (e.g., networks), grid-structured data (e.g., images), and sequential data (e.g., text). By exploiting this inherently structured knowledge, one can design plausible approaches to identify and use more relevant variables to make reliable decisions, thereby facilitating real-world deployments.

In this workshop, we will examine the research progress towards accountable and ethical use of AI from diverse research communities, such as the ML community, the security & privacy community, and more. Specifically, we will focus on the limitations of existing notions of Privacy, Accountability, Interpretability, Robustness, and Reasoning. We aim to bring together researchers from various areas (e.g., ML, security & privacy, computer vision, and healthcare) to facilitate discussions of related challenges, definitions, formalisms, and evaluation protocols regarding the accountable and ethical use of ML technologies in high-stakes applications with structured data. In particular, we will discuss the interplay among the fundamental principles from theory to applications. We aim to identify new areas that call for additional research efforts. Additionally, we will seek possible solutions and associated interpretations from the notion of causation, which is an inherent property of systems. We hope that the workshop will be fruitful in building accountable and ethical use of AI systems in practice.
