We aim to create a venue where we discuss seemingly contrasting challenges in machine learning research and their consequences. We invite researchers to discuss the boundaries between science and engineering, the implications of having blurred boundaries, and their potential consequences in areas of life beyond research.
We organized the first "Science meets Engineering in Deep Learning" workshop at NeurIPS 2019, which aimed to identify the potential boundaries between science and engineering and the role of theoretically driven and application-driven research in deep learning. The workshop's discussions highlighted how intertwined science and engineering are and emphasized the benefits of their symbiotic relationship in pushing the boundaries of both theoretically driven and application-driven research. To highlight the communication channel we aimed to build, we chose "Science meets Engineering" as the title for the first iteration of the workshop.
Since then, such boundaries appear harder and harder to draw, and it becomes increasingly clear that we need to agree on a set of values that define us as a community, and that will shape our future research. In particular, we envision that such values will help (1) emphasize important engineering and scientific practices that we should foster to increase the robustness of our …
Data compression is a problem of great practical importance, and a new frontier for machine learning research that combines empirical findings (from the deep probabilistic modeling literature) with fundamental theoretical insights (from information theory, source coding, and minimum description length theory). Recent work building on deep generative models such as variational autoencoders, GANs, and normalizing flows showed that novel machine-learning-based compression methods can significantly outperform state-of-the-art classical compression codecs for image and video data. At the same time, these neural compression methods provide new evaluation metrics for model and inference performance on a rate/distortion trade-off. This workshop aims to draw more attention to the young and highly impactful field of neural compression. In contrast to other workshops that focus on practical compression performance, our goal is to bring together researchers from deep learning, information theory, and probabilistic modeling, to learn from each other and to encourage exchange on fundamentally novel issues such as the role of stochasticity in compression algorithms or ethical risks of semantic compression artifacts.
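To make the rate/distortion trade-off concrete, here is a minimal sketch, not any particular paper's method, of the training objective commonly used by VAE-style neural compressors; the function name, the lmbda weight, and the likelihoods interface are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def rate_distortion_loss(x, x_hat, likelihoods, lmbda=0.01):
    """Hypothetical sketch of the R + lambda * D objective.

    x:           original images, shape (N, C, H, W)
    x_hat:       reconstructions produced by the decoder
    likelihoods: entropy-model probabilities of the quantized latents
    lmbda:       trade-off weight; larger values favor lower distortion
    """
    n, _, h, w = x.shape
    # Rate: estimated bits per pixel, from the latent likelihoods.
    rate = -torch.log2(likelihoods).sum() / (n * h * w)
    # Distortion: mean squared reconstruction error.
    distortion = F.mse_loss(x_hat, x)
    return rate + lmbda * distortion
```

Sweeping lmbda traces out a rate/distortion curve, which is also why neural compression doubles as an evaluation metric for model and inference performance.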
To reach top-tier performance, deep learning architectures usually rely on a large number of parameters and operations, and thus require considerable power and memory to process. Numerous works have proposed to tackle this problem using quantization of parameters, pruning, clustering of parameters, decomposition of convolutions, or distillation. However, most of these works aim at accelerating only the inference process and disregard the training phase, even though in practice it is the learning phase that is by far the most complex. There have been recent efforts to compress the training process as well, but doing so remains challenging. In this workshop, we propose to focus on reducing the complexity of the training process. Our aim is to gather researchers interested in reducing energy, time, or memory usage for faster/cheaper/greener prototyping or deployment of deep learning models. Because deep learning depends on large computational capacities, the outcomes of the workshop could benefit all who deploy these solutions, including those who are not hardware specialists. Moreover, it would contribute to making deep learning more accessible to small businesses and small laboratories. Indeed, training complexity is of interest to many distinct communities. A first example is training on edge …
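As one concrete instance of the compression techniques listed above, the following is a minimal sketch of unstructured magnitude pruning; the function name and the sparsity value are illustrative, not a prescribed method:

```python
import torch

def magnitude_prune_mask(weight: torch.Tensor, sparsity: float = 0.5) -> torch.Tensor:
    """Return a 0/1 mask that zeroes out the smallest-magnitude weights."""
    k = int(weight.numel() * sparsity)  # number of weights to prune
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

# Hypothetical use inside a training loop: re-apply the mask after each
# optimizer step so pruned weights stay at zero during training,
# not only at inference time.
# mask = magnitude_prune_mask(layer.weight.data)
# layer.weight.data.mul_(mask)
```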
Over the past two decades, high-throughput data collection technologies have become commonplace in most fields of science and technology, and with them an ever-increasing amount of big high-dimensional data is being generated by virtually every real-world system. While such data systems are highly diverse in nature, the underlying data analysis and exploration tasks give rise to common challenges at the core of modern representation learning. For example, even though modern real-world data typically have high-dimensional ambient measurement spaces, they often exhibit low-dimensional intrinsic structures that can be uncovered by geometry-oriented methods, such as the ones encountered in manifold learning, graph signal processing, geometric deep learning, and topological data analysis. As a result, recent years have seen significant interest and progress in geometric and topological approaches to representation learning, which enable tractable exploratory analysis by domain experts who are often not computation-oriented. Our overarching goal in the proposed workshop is to deepen our understanding of the challenges and opportunities in this field, while breaking the barriers between the typically disjoint computational approaches (or communities) that work in this field, with emphasis on the domains of topological data analysis, graph representation learning, and manifold learning, on which we shall subsequently briefly comment.
Website: https://gt-rl.github.io/
Data coupled with the right algorithms offers the potential to save lives, protect the environment, and increase profitability across applications and domains. This potential, however, can be severely inhibited by adverse data properties, which can result in poor model performance, failed projects, and potentially serious social implications. This workshop will examine representation learning in the context of limited and sparse training samples, class imbalance, long-tailed distributions, rare cases and classes, and outliers. Speakers and participants will discuss the challenges and risks associated with designing, developing, and learning deep representations from data with adverse properties. In addition, the workshop aims to connect researchers devoted to these topics in the traditional shallow representation learning community and the more recent deep learning community, in order to advance novel and holistic solutions. Critically, given the growth in the application of AI to real-world decision making, the workshop will also facilitate a discussion of the potential social issues associated with applying deep representation learning in the context of data adversity. The workshop will bring together theoretical and applied deep learning researchers from academia and industry, and lay the groundwork for fruitful research collaborations that span communities that are often siloed.
Over the last decade, the volume of conference submissions in machine learning has broken records. Despite rapid advancements and increasing hype around AI, there is growing concern in the ML community about where the field is headed. The current pandemic gives researchers a long-awaited opportunity to pause and reflect: what kind of legacy do we want to leave behind? How are scientific results presented? How do we interpret and explain them? Does this process include and/or allow access to all stakeholders? Are the results reproducible? These are some of the many facets of effective scientific communication which will shape the next decade of ML research.
How much research is overlooked due to inaccessible communication? How many papers will be as readable in ten or twenty years? How can we make the proceedings more accessible for future generations of ML researchers? These are a few of the questions we plan to discuss in our workshop. We hope to instigate an exciting discussion on redesigning the scientific paper for the next few years of machine learning research!
Reinforcement learning entails letting an agent learn through interaction with an environment. The formalism is powerful in its generality, and presents us with a hard, open-ended problem: how can we design agents that learn efficiently, and generalize well, given only sensory information and a scalar reward signal? The goal of this workshop is to explore the role of self-supervised learning within reinforcement learning agents, and to make progress towards this goal.
Energy-Based Models (EBMs) are a learning framework that assigns a quality score, called an energy, to any given input; contrary to probabilistic models, there is no a priori requirement that these scores be normalized (i.e., sum to one). Energies are typically computed by a neural network, and training an EBM corresponds to shaping the energy function so that data points near the underlying data manifold are assigned lower energies than data points far from it. Not imposing normalization affords great power and flexibility to the modelling process, e.g. in combining energies, conditioning on certain variables, computing global scores on complex structured objects, or expressing prior knowledge. However, this freedom comes with significant technical challenges in learning and inference.
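As a minimal illustration of this setup, here is a sketch of an energy network with a contrastive training signal; it assumes negatives are obtained elsewhere (e.g. via Langevin sampling), and all names are ours:

```python
import torch
import torch.nn as nn

class EnergyNet(nn.Module):
    """An EBM head: maps an input to a single unnormalized scalar energy."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.SiLU(), nn.Linear(128, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # no softmax: scores need not sum to one

def contrastive_ebm_loss(energy, x_data, x_neg):
    """Shape the energy surface: lower it on data, raise it on negatives."""
    return energy(x_data).mean() - energy(x_neg).mean()
```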
A strong comeback of EBMs is currently underway. This ICLR 2021 workshop is an opportunity to increase awareness of the diversity of work in this area, to discuss current challenges, and to encourage cross-pollination between the different communities around this topic.
The COVID-19 pandemic has cast a spotlight on the importance of public health. Even beyond this current emergency, public health is an essential component of population-level wellbeing. Topics such as infectious disease surveillance and control, preventative health, behavioral and mental health, maternal and child wellbeing, and more all play a crucial role in society. Moreover, a range of applications in public health benefit from careful use of data to uncover outbreak dynamics, learn patterns of behavior, optimize the design of interventions, and more. The science of machine learning in a public health context is still rapidly developing, and our aim is to build a community encompassing researchers based in both machine learning and public health to address these shared questions.
In this workshop, we focus on a particular kind of reasoning ability, namely, mathematical reasoning. Advanced mathematical reasoning is unique in human intelligence, and it is also a fundamental building block for many intellectual pursuits and scientific developments. We believe that addressing this problem has the potential to shed light on a path towards general reasoning mechanisms, and hence general artificial intelligence. Therefore, we would like to bring together a group of experts from various backgrounds to discuss the role of mathematical reasoning ability towards the path of demonstrating general artificial intelligence. In addition, we hope to identify missing elements and major bottlenecks towards demonstrating mathematical reasoning ability in AI systems.
Humans have a remarkable ability to continually learn and adapt to new scenarios over the duration of their lifetime (Smith & Gasser, 2005). This ability is referred to as never-ending learning, also known as continual learning or lifelong learning. Never-ending learning is the constant development of increasingly complex behaviors and the process of building complicated skills on top of those already developed (Ring, 1997), while the learner remains able to reapply, adapt, and generalize its abilities to new situations. A never-ending learner has the following desiderata:
1) it learns behaviors and skills while solving its tasks
2) it invents new subtasks that may later serve as stepping stones
3) it learns hierarchically, i.e. skills learned now can be built upon later
4) it learns without ergodic or resetting assumptions on the underlying (PO)MDP
5) it learns without episode boundaries
6) it learns in a single life without leveraging multiple episodes of experience
There are several facets to building AI agents with never-ending learning abilities, and different fields offer a variety of perspectives on achieving this goal. To this end, we identify key themes for our workshop, including cognitive sciences, developmental robotics, agency and abstractions, open-ended learning, world modelling, and active inference.
Neural Architecture Search (NAS) is an exciting new field of study that takes representation learning to the next level by learning, in a data-driven way, the very architectures that enable efficient learning of representations. While representation learning removed the need for manual feature engineering, it shifted the manual effort to the selection of architectures; as a natural next step, NAS replaces this manual architecture selection step, allowing true end-to-end learning of the architecture, the features, and the final classifier using those features, all expressed as instantiations of the architecture.
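For intuition, a random-search baseline over a toy search space makes this "architecture selection as learning" framing concrete; the search space and names below are hypothetical:

```python
import random

SEARCH_SPACE = {  # toy, hypothetical architecture search space
    "num_layers": [2, 4, 8],
    "width": [64, 128, 256],
    "activation": ["relu", "gelu"],
}

def random_search(evaluate, num_trials: int = 20):
    """Simplest NAS baseline: sample architectures, keep the best.

    `evaluate` is assumed to train the candidate architecture and
    return a validation score.
    """
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```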
Since the first workshop on NAS at ICLR 2020, there have been many new developments in NAS. Firstly, there has been a large increase in standardized tabular benchmarks and more researchers releasing source code, leading to more rigorous empirical NAS research and also allowing research groups without access to industry-scale compute resources to run thorough experimental evaluations. Secondly, there are now several works aiming for standardized and modularized open-source libraries that allow for both clean evaluations of different approaches without confounding factors and for mixing and matching components of different NAS methods. Finally, by now there are also several applications of NAS beyond its original narrow …
Oceans play a key role in the biosphere: they regulate the carbon cycle, absorbing emitted CO2 through the biological pump, as well as a large part of the heat that the remaining CO2 and other greenhouse gases retain in the atmosphere. Understanding the drivers of micro- and macroorganisms in the ocean is of paramount importance for understanding the functioning of ecosystems and the efficiency of the biological pump in sequestering carbon and thus abating climate change.
AI, ML, and mathematical modeling tools are key to understanding oceans and climate change. Consequently, the topics of interest of this workshop can be grouped into two sets.
In regard to AI and modeling, the main focus is set on:
- handling of graph-structured information,
- ML methods to learn in small data contexts,
- causal relations, interpretability, and explainability in AI,
- integrating model-driven and data-driven approaches, and
- developing, calibrating, and validating existing mechanistic models.
In the domain application area, the main questions to be addressed are:
- What are the major patterns in plankton taxa and functional diversity?
- How are these patterns and their drivers likely to change under climate change?
- How will changes affect the capacity of ocean ecosystems to sequester carbon …
The brain comprises billions of neurons organized into an intricate network of highly specialized functional areas. This biological cognitive system can efficiently process vast amounts of multi-modal data to perceive and react to its ever-changing environment. Unlike current AI systems, it does not struggle with domain adaptation, few-shot learning, or common-sense reasoning. Inspiration from neuroscience has benefited AI in the past: dopamine reward signals inspired TD learning, modern convolutional networks mimic the deep, nested information flow in visual cortex, and hippocampal replay of previous experiences has brought about experience replay in reinforcement learning. Recent work at the intersection of neuroscience and AI has made progress in directly integrating neuroscientific data with AI systems and has led to learned representations that are more robust to label corruptions, allow for better generalization in some language tasks, and provide new ways to interpret and evaluate what domain-relevant information is learned by deep neural networks. In this workshop, we aim to examine the extent to which insights about the brain can lead to better AI.
Artificial Intelligence and Machine Learning are increasingly employed by industry and government alike to make or inform high-stakes decisions for people in areas such as employment, credit lending, policing, criminal justice, healthcare, and beyond. Over the past several years, we have witnessed growing concern regarding the risks and unintended consequences of inscrutable ML techniques (in particular, deep learning) in such socially consequential domains. This realization has motivated the community to look more closely at the societal impacts of automated decision making and to develop tools to ensure the responsible use of AI in society. Chief among the ideals that the ML community has set out to formalize and ensure are safety, interpretability, robustness, and fairness. In this workshop, we examine the community's progress toward these values and aim to identify areas that call for additional research efforts. In particular, by bringing together researchers with diverse backgrounds, we will focus on the limitations of existing formulations of fairness, explainability, robustness, and safety, and discuss the tradeoffs among them.
Our workshop will consist of a diverse set of speakers (ranging from researchers with social work background to researchers in the ML community) to discuss transparency, bias and inequity in various real-world problems, including but not …
Deep Neural Networks (DNNs) are the leading approach for nearly all domains of machine learning and computer vision, with performance at times rivaling human perception. However, there is consensus that these models are outmatched by the robustness and versatility of biological brains. DNNs are sensitive to so-called distributional shifts, where systematic differences between the train and test sets can significantly degrade performance. Distributional shifts can be induced by random or structured (adversarial) perturbations; changes in object or scene viewpoint, illumination, or color; and novel compositions of familiar features. These issues are magnified in domains where training data is scarce. In contrast, flexible and efficient generalization is a hallmark of biological perception and intelligence. We believe that the algorithms implemented in biological brains offer clues for how to construct artificial intelligence that can generalize beyond the training distribution.
The limited generalization of neural networks is a critical problem for artificial intelligence, in applications ranging from automated driving to biomedical image analysis, and in domains like reinforcement learning, control, and representational theory. Our goal is to address these issues by creating synergies among neuroscientists, cognitive scientists, and artificial intelligence researchers that might lead to novel solutions to this problem or …
Deep learning relies on massive training sets of labeled examples, often tens of thousands to millions, to reach peak predictive performance. However, large amounts of training data are only available for very few standardized learning problems. Even small variations of the problem specification or changes in the data distribution can necessitate re-annotation of large amounts of data.
Fortunately, domain knowledge can often be expressed as sets of prototypical descriptions. These knowledge-based descriptions can be used either as rule-based predictors or as labeling functions that provide partial data annotations. The growing field of weak supervision provides methods for refining and generalizing such heuristic-based annotations in interaction with deep neural networks and large amounts of unannotated data.
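To illustrate what a labeling function looks like in practice, here is a minimal sketch in the spirit of weak-supervision frameworks; the task, label space, and heuristics are invented for illustration:

```python
SPAM, HAM, ABSTAIN = 1, 0, -1  # hypothetical label space

def lf_contains_link(text: str) -> int:
    """Heuristic labeling function: noisy, and allowed to abstain."""
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_very_short(text: str) -> int:
    return HAM if len(text.split()) < 5 else ABSTAIN

def majority_vote(text: str, lfs) -> int:
    """Crudest way to combine labeling functions into one noisy label.
    Real weak-supervision methods instead model per-LF accuracies and
    correlations from their agreements and disagreements."""
    votes = [v for v in (lf(text) for lf in lfs) if v != ABSTAIN]
    return max(set(votes), key=votes.count) if votes else ABSTAIN
```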
In this workshop, we want to advance theory, methods, and tools that allow experts to express prior knowledge in coded form for automatic data annotation, which can then be used to train arbitrary deep neural networks for prediction. Learning with weak supervision is both studied from a theoretical perspective and applied to a variety of tasks from areas like natural language processing and computer vision. This workshop aims at bringing together researchers from this wide range of fields to facilitate discussions across …
Every day, millions of people use natural language interfaces in virtual digital assistants such as Amazon Alexa, Apple's Siri, Google Assistant, Microsoft Cortana, Samsung's Bixby, and Facebook Portal via in-home devices or phones. At the same time, interest among the NLP research community in conversational systems has blossomed to the extent that Dialogue and Interactive Systems is consistently among the top three tracks at NLP conferences, receiving a record number of submissions. Today's industrial conversational AI systems are built using the traditional NLP pipeline, i.e., natural language understanding, dialog state tracking, dialog policy, and natural language generation. Despite its success, this pipeline fundamentally limits the performance, humanness, and scaling of conversational AI systems. To overcome these challenges, dialog researchers have started embracing end-to-end neural approaches for the next generation of conversational AI systems, as such approaches have been setting state-of-the-art performance records on several NLP tasks. However, neural conversational AI systems are still far from shippable in the real world. We identify the following main outstanding questions to bridge this gap:
- Grounding in external systems
- Safety/integrity/robustness
- Continual learning
The goal of this workshop is to bring together machine learning researchers and dialog researchers from academia and industry to encourage …
Data are the most valuable ingredient of the machine learning models that help researchers and companies make informed decisions. However, access to rich, diverse, and clean datasets may not always be possible. One reason for the lack of rich datasets is the substantial amount of time needed for data collection, especially when manual annotation is required. Another reason is the need to protect privacy whenever raw data contains sensitive information about individuals and hence cannot be shared directly. A powerful solution that can address both of these challenging scenarios is generating synthetic data. Thanks to recent advances in generative models, it is possible to create realistic synthetic samples that closely match the distribution of complex, real data. In the case of limited labeled data, synthetic data can be used to augment training data to mitigate overfitting. In the case of protecting privacy, data curators can share the synthetic data instead of the original data, so that the utility of the original data is preserved but privacy is protected. Despite the substantial benefits of using synthetic data, the process of synthetic data generation is still an ongoing technical challenge. Although the two scenarios of limited data and privacy concerns share …
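A minimal sketch of the augmentation use case described above, assuming an already-trained class-conditional generative model; the generator interface and all names are illustrative:

```python
import torch
from torch.utils.data import ConcatDataset, TensorDataset

@torch.no_grad()
def augment_with_synthetic(real_dataset, generator, num_synthetic,
                           latent_dim, num_classes):
    """Append class-conditional generator samples to the real data.

    `generator(z, y)` is an assumed interface for a trained conditional
    generative model; a real pipeline would also filter samples for
    quality and, in the privacy setting, train the generator with
    differential privacy before sharing its outputs.
    """
    z = torch.randn(num_synthetic, latent_dim)
    y = torch.randint(0, num_classes, (num_synthetic,))
    x = generator(z, y)
    return ConcatDataset([real_dataset, TensorDataset(x, y)])
```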
The constant progress being made in artificial intelligence needs to extend across borders if we are to democratize AI in developing countries. Adapting state-of-the-art (SOTA) methods to resource-constrained environments such as developing countries is challenging in practice. Recent breakthroughs in natural language processing (NLP), for instance, rely on increasingly complex and large models (e.g. most models based on transformers, such as BERT, VilBERT, ALBERT, and GPT-2) that are pre-trained on large corpora of unlabeled data. In most developing countries, low or limited resources mean a hard path towards adoption of these breakthroughs. Methods such as transfer learning will not fully solve the problem either, due to bias in pre-training datasets that do not reflect real test cases in developing countries, as well as the prohibitive cost of fine-tuning these large models. Recent progress focused on ML for social good has the potential to alleviate the problem in part. However, the themes in such workshops are usually application driven, such as ML for healthcare and for education, and less attention is given to the practical aspects of implementing these solutions in low- or limited-resource scenarios in developing countries. This, in turn, hinders the democratization of AI …
Recent years have seen a lot of interest in the use and development of learning-to-learn algorithms. Research on learning-to-learn, or meta-learning, algorithms is often motivated by the hope of learning representations that can be easily transferred to the learning of new skills and lead to faster learning. Yet, current meta-learned representations often struggle to generalize to novel task settings. In this workshop, we'd like to discuss how humans meta-learn, and what we can and should expect from learning-to-learn in the field of machine learning. Our aim is to bring together researchers from a variety of backgrounds in the hope of discussing and reasoning about what learning to learn means from a cognitive perspective, and how this knowledge might translate into algorithmic advances. In particular, we are interested in creating a platform to enable exchange between the fields of neuroscience and machine learning.
We believe that it is an important moment for the machine learning community to reflect upon these questions in order to advance the field and increase its variety in approaching learning to learn. We hope that by fostering discussions between cognitive science and machine learning researchers, we enable both sides to draw inspiration to further the understanding …
Language models that have been trained on unlabeled text data are a cornerstone of modern natural language processing (NLP) research, and many recent state-of-the-art results in NLP were achieved by leveraging these self-supervised models. The success of this recipe is largely thanks to scalability: better results can often be obtained by training larger models on larger amounts of unlabeled text data. This places our field at a crossroads. Will scaling lead to models that outperform humans on all text-based tasks, or are there limits to the scalability of these models? Should we focus on simply scaling these models, or should we design more sophisticated architectures and training schemes? Do our current benchmarks effectively test capabilities that humans can master but large language models lack? How can we address the legal and ethical issues that arise from using unstructured web crawls for training language models? What can we learn from the fields of cognition, linguistics, and philosophy as we attempt to measure the “intelligence” of machines? The goal of this workshop is to find answers to these questions by inviting a diverse group of researchers to critically examine the state of giant language models.
This workshop will have a non-standard submission …
Despite encouraging progress in embodied learning over the past two decades, there is still a large gap between embodied agents' perception and human perception. Humans have a remarkable ability to combine their multisensory inputs. To close the gap, embodied agents should also be enabled to see, hear, touch, and interact with their surroundings in order to select the appropriate actions. However, today's learning algorithms primarily operate on a single modality. In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals jointly. The goal of this workshop is to share recent progress and discuss current challenges in embodied learning with multiple modalities.
The EML workshop will bring together researchers in different subareas of embodied multimodal learning including computer vision, robotics, machine learning, natural language processing, and cognitive science to examine the challenges and opportunities emerging from the design of embodied agents that unify their multisensory inputs. We will review the current state and identify the research infrastructure needed to enable a stronger collaboration between researchers working on different modalities.
As machine learning (ML) is deployed pervasively, there is an increasing demand for ML systems to behave reliably when the input to the system has changed. Much work has emerged regarding artificial and natural changes to data, with growing interest in studying the robustness and reliability of ML systems in the presence of real-world changes. This shift towards more realistic considerations raises both old and new fundamental questions for machine learning:
1. Can we bring principled research in robustness closer to real-world effects?
2. How can we demonstrate the reliability of ML systems in real-world deployments?
3. What are the unique societal and legal challenges facing robustness for deployed ML systems?
Consequently, the goal of this workshop is to bring together research in robust machine learning with the demands and reliability constraints of real-world processes and systems, with a focus on the practical, theoretical, and societal challenges in bringing these approaches to real-world scenarios. We highlight emerging directions, paradigms, and applications, including (1) characterizing real-world changes for robustness; (2) reliability of real-world systems; and (3) societal and legal considerations.
Over the last decade, progress in machine learning has resulted in a surge of data-driven services affecting our daily lives. Conversational agents, healthcare providers, online retailers, and social networks continually access and jointly process vast amounts of data about their geographically distributed customers. Progress in distributed machine learning technology, which has enabled widespread adoption and personalization, has also raised issues regarding privacy, accountability, and fairness. This tension is particularly apparent in the context of the COVID-19 pandemic. This motivates the need to jointly address distributed and private machine learning technologies.
Recently there has been a surge in interest in using deep learning to facilitate simulation, in application areas including physics, chemistry, robotics and graphics.
We define simulation as the process of iteratively generating the output of the next time step from the output of the previous time step, starting from an initial condition. Recent works have started to actively explore the potential of using deep learning to improve these highly important simulations in terms of accuracy and efficiency. The primary motivation of the workshop is thus to encourage knowledge sharing and communication: we believe that this workshop will bring these communities together and foster communication and collaboration, in order to speed up research on this important topic.
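Under that definition, a learned simulator reduces to an autoregressive rollout. A minimal sketch follows; the one-step model and all names are assumptions:

```python
import torch

@torch.no_grad()
def rollout(step_model, initial_state, num_steps: int):
    """Simulation as defined above: feed each predicted state back in
    as the input for the next time step, starting from an initial
    condition."""
    states = [initial_state]
    for _ in range(num_steps):
        states.append(step_model(states[-1]))
    return torch.stack(states)

# Hypothetical usage with any learned one-step transition model:
# trajectory = rollout(transition_net, x0, num_steps=100)
```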
While machine learning (ML) models have achieved great success in many applications, concerns have been raised about their potential vulnerabilities and risks when applied to safety-critical applications. On the one hand, from the security perspective, studies have explored worst-case attacks against ML models, which in turn inspire both empirical and certifiable defense approaches. On the other hand, from the safety perspective, researchers have looked into safety constraints that should be satisfied by safe AI systems (e.g. autonomous driving vehicles should not hit pedestrians). This workshop makes a first attempt at bridging the gap between these two communities and aims to discuss principles for developing secure and safe ML systems. The workshop also focuses on how future practitioners should prepare themselves to reduce the risks of unintended behaviors of sophisticated ML models.
The workshop will bring together experts from machine learning, computer security, and AI safety communities. We attempt to highlight recent related work from different communities, clarify the foundations of secure and safe ML, and chart out important directions for future work and cross-community collaborations.
Pandemics are major disasters in human history. The recent COVID-19 pandemic has caused about 0.52 million deaths and infected about 11 million people all over the world as of July 3. In the past two decades, several pandemics/epidemics, including Zika, SARS, Ebola, and H1N1 flu, have killed a large number of people. Medical experts predict that future pandemics will periodically occur and may be even worse than past ones. Since the outbreak of COVID-19, AI researchers have been developing methods to combat this pandemic, including building forecasting models to predict the spread of coronavirus, developing computer vision methods to analyze CT scans and chest X-rays for screening and risk assessment of infected cases, and leveraging computational biology methods for vaccine development. These efforts have shown high utility in controlling the spread of COVID-19 and pave a promising path toward preventing future pandemics. To further promote research on AI-based control of pandemics, we aim to organize a workshop which brings together researchers in machine learning, healthcare, medicine, public health, and related fields, and facilitates discussions and collaborations in developing machine learning and AI methods to diagnose and treat infectious diseases and to prevent and contain pandemics. Different from previous healthcare-related workshops, our workshop …