Registration and Check-in are located in the lobby of the convention center near the Radisson entrance.
Joint IndabaX Rwanda 2023 / Black In AI Workshop
An IndabaX is a locally-organised Indaba (i.e. gathering) that helps spread knowledge and build capacity in machine learning and artificial intelligence in individual countries across Africa.
Objectives & goals of IndabaX Rwanda 2023
Gather and connect the ML community in Rwanda with a core objective of “Strengthening Machine Learning and AI in Rwanda.”
To strengthen a community around Artificial Intelligence (AI) and Machine Learning (ML) in Rwanda and facilitate learning:
Bring together all AI and ML practitioners of Rwanda.
Increase interest in AI and ML in Rwanda.
Connect students, start-ups, businesses and companies.
Increase interest in research in Rwanda.
Kaggle@ICLR 2023: ML Solutions in Africa
Kaggle will be hosting its first workshop at the Eleventh International Conference on Learning Representations (ICLR) in Kigali, Rwanda on May 4, 2023. This workshop will be centered around using machine learning to help address societal challenges in Africa, and is uniquely aimed at early-to-mid-career data scientists and ML researchers. It will also feature a live hackathon hosted in partnership with Zindi.
- Learn from talks by data scientists and researchers doing impactful ML work on prevalent issues in Africa.
- Participate in a live competition (in-person only). Top-ranked finishers in the competition will earn prizes!
- Connect with other early-to-mid-career data scientists and network with advanced ML researchers.
- Meet the Kaggle & Zindi teams: Julia Elliott (Kaggle COO), Walter Reade (Kaggle Competitions Data Scientist), and Amy Bray (Zindi Africa Data Scientist) will be facilitating the workshop in partnership with Zindi and hosting the competition.
Trustworthy and Reliable Large-Scale Machine Learning Models
In recent years, the landscape of AI has been significantly altered by advances in large-scale pre-trained models. Scaling up models with more data and parameters has significantly improved performance and achieved great success in various applications, from natural language understanding to multi-modal representation learning. However, when applying large-scale AI models to real-world applications, there have been concerns about their potential security, privacy, fairness, robustness, and ethics issues. In the wrong hands, machine learning could be used to negatively impact mission-critical domains, including healthcare, education, and law, resulting in economic and environmental consequences as well as legal and ethical concerns. For example, existing studies have shown that large-scale pre-trained language models contain toxicity in open-ended generation and risk amplifying bias against marginalized groups, such as BIPOC and LGBTQ+ people. Moreover, large-scale models can unintentionally leak sensitive personal information during the pre-training stage. Last but not least, machine learning models are often viewed as "black boxes" and may produce unpredictable, inaccurate, and unexplainable results, especially under domain shifts or maliciously tailored attacks. To address these negative societal impacts of large-scale models, researchers have investigated different approaches and principles to ensure robust and trustworthy large-scale AI systems. This workshop aims to bridge the gap between the security, privacy, fairness, and ethics communities and the large-scale AI model community, and to discuss the principles and experiences of developing robust and trustworthy large-scale AI systems. We aim to highlight recent related work from different communities, clarify the foundations of trustworthy machine learning, and chart out important directions for future work and cross-community collaborations.
Physics for Machine Learning
Combining physics with machine learning is a rapidly growing field of research. Thus far, most of the work in this area focuses on leveraging recent advances in classical machine learning to solve problems that arise in the physical sciences. In this workshop, we wish to focus on a slightly less established topic, which is the converse: exploiting structures (or symmetries) of physical systems, as well as insights developed in physics, to construct novel machine learning methods and gain a better understanding of such methods. A particular focus will be on the synergy between scientific problems and machine learning, and on incorporating the structure of these problems into the machine learning methods used in that context. However, such models are not limited to problems in the physical sciences and can be applied even more broadly to standard machine learning problems, e.g. in computer vision, natural language processing or speech recognition.
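As a toy illustration of this idea (our own sketch, not code from the workshop), consider building a physical symmetry directly into a model: a readout over a set of identical particles that sums per-particle features is permutation-invariant by construction, so relabelling the particles cannot change the prediction.

```python
import numpy as np

def per_particle_features(x):
    """Hypothetical per-particle feature map; any pointwise function works."""
    return np.tanh(x)

def invariant_readout(particles):
    """Sum over particles: invariant to any permutation of the rows."""
    return per_particle_features(particles).sum(axis=0)

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 3))            # 5 identical particles, 3 features each
out = invariant_readout(x)
out_permuted = invariant_readout(x[::-1])  # relabel the particles
# out and out_permuted agree exactly: the symmetry is baked into the architecture
```

Enforcing the symmetry architecturally, rather than hoping the model learns it from data, is one concrete way physics insight shapes the method itself.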
Trustworthy Machine Learning for Healthcare
Machine learning (ML) has achieved or even exceeded human performance in many healthcare tasks, owing to the fast development of ML techniques and the growing scale of medical data. However, ML techniques are still far from being widely applied in practice. Real-world scenarios are far more complex, and ML often faces challenges to its trustworthiness, such as limited explainability, generalization, fairness, and privacy. Improving the credibility of machine learning is hence of great importance to enhance the trust and confidence of doctors and patients in using the related techniques. We aim to bring together researchers from interdisciplinary fields, including but not limited to machine learning, clinical research, and medical imaging, to provide different perspectives on how to develop trustworthy ML algorithms to accelerate the adoption of ML in healthcare.
AI for Agent-Based Modelling (AI4ABM)
Many of the world's most pressing issues, such as climate change, pandemics, financial market stability and fake news, are emergent phenomena that result from the interaction between a large number of strategic or learning agents. Understanding these systems is thus a crucial frontier for scientific and technological development that has the potential to permanently improve the safety and living standards of humanity. Agent-Based Modelling (ABM) (also known as individual-based modelling) is an approach toward creating simulations of these types of complex systems by explicitly modelling the actions and interactions of the individual agents contained within. However, current methodologies for calibrating and validating ABMs rely on human expert domain knowledge and hand-coded behaviours for individual agents and environment dynamics. Recent progress in AI has the potential to offer exciting new approaches to learning, calibrating, validating, analysing and accelerating ABMs. This interdisciplinary workshop is meant to bring together practitioners and theorists to boost ABM method development in AI, and to stimulate novel applications across disciplinary boundaries and continents, making ICLR the ideal venue. Our inaugural workshop will be organised along two axes. First, we seek to provide a venue where ABM researchers from a variety of domains can introduce AI researchers to their respective domain problems. To this end, we are inviting a number of high-profile speakers across various application domains. Second, we seek to stimulate research into AI methods that can scale to large-scale agent-based models with the potential to redefine our capabilities of creating, calibrating, and validating such models. These methods include, but are not limited to, simulation-based inference, multi-agent learning, causal inference and discovery, program synthesis, and the development of domain-specific languages and tools that allow for tight integration of ABMs and AI approaches.
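To make the paradigm concrete: in its simplest form an agent-based model is just a loop over explicitly represented agents and their interactions. The sketch below (a hypothetical minimal example, not drawn from any workshop material) simulates a rumour spreading through random pairwise contacts:

```python
import random

def run_abm(n_agents=100, steps=50, p_transmit=0.3, seed=0):
    """Toy ABM: each step, every agent contacts one random other agent;
    an informed contact passes the rumour on with probability p_transmit."""
    rng = random.Random(seed)
    informed = [False] * n_agents
    informed[0] = True                    # one initially informed agent
    history = []
    for _ in range(steps):
        for i in range(n_agents):
            j = rng.randrange(n_agents)
            if informed[j] and not informed[i] and rng.random() < p_transmit:
                informed[i] = True
        history.append(sum(informed))     # emergent adoption curve
    return history

counts = run_abm()
```

Even for this toy model, recovering `p_transmit` from an observed adoption curve has no closed form, which hints at why simulation-based inference and learned surrogates are attractive for calibrating realistic ABMs.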
Reincarnating Reinforcement Learning
Learning “tabula rasa”, that is, from scratch without much previously learned knowledge, is the dominant paradigm in reinforcement learning (RL) research. However, learning tabula rasa is the exception rather than the norm for solving larger-scale problems. Additionally, the inefficiency of tabula rasa RL typically excludes the majority of researchers outside certain resource-rich labs from tackling computationally demanding problems. To address the inefficiencies of tabula rasa RL and help unlock the full potential of deep RL, our workshop aims to bring further attention to this emerging paradigm of reusing prior computation in RL, discuss its potential benefits and real-world applications, discuss its current limitations and challenges, and come up with concrete problem statements and evaluation protocols for the research community to work on. Furthermore, we hope to foster discussion via panels (with audience participation), several contributed talks, and by welcoming short opinion papers in our call for papers.
Tackling Climate Change with Machine Learning: Global Perspectives and Local Challenges
Climate change is one of the greatest problems society has ever faced, with increasingly severe consequences for humanity as natural disasters multiply, sea levels rise, and ecosystems falter. While climate change is a truly global problem, it manifests itself via many local effects, which pose unique problems and require corresponding actions. These actions can take many forms, from designing smart electric grids to tracking greenhouse gas emissions through satellite imagery. While no silver bullet, machine learning can be an invaluable tool in fighting climate change via a wide array of applications and techniques. These applications require algorithmic innovations in machine learning and close collaboration with diverse fields and practitioners. This workshop is intended as a forum for those in the global machine learning community who wish to help tackle climate change, and further aims to foster cross-pollination between researchers in machine learning and experts in complementary climate-relevant fields. Building on our past workshops on this topic, this workshop particularly aims to explore the connection between global perspectives and local challenges in the context of applying machine learning towards tackling climate change. We want to take the opportunity of the first leading machine learning conference being hosted in person in a non-Western country to shine a light on work that deploys, analyzes or critiques ML methods and their use for climate change adaptation and mitigation in low-income countries.
Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)
Foundation models (FMs) are models that are trained on a large and diverse pool of data and can be adapted to a wide range of tasks. Recent examples of FMs include large language models (GPT-3, BERT, PaLM), image representation encoders (SimCLR), and image-text models (CLIP, DALL-E), which have all revolutionized the way models are built in their domains. Foundation models are nevertheless poorly understood: their core driving principle is transfer learning, but scale and modern self-supervision techniques have led to emergent capabilities we might not have anticipated. The goal of this workshop is to highlight research that aims to improve our understanding of FMs. We liberally interpret understanding as any research ranging from purely empirical papers that highlight interesting phenomena to those that attempt to explain or provide theoretical foundations for such phenomena in potentially simplified settings.
Neural Fields across Fields: Methods and Applications of Implicit Neural Representations
Addressing problems in different science and engineering disciplines often requires solving optimization problems, including via machine learning from large training data. One class of methods has recently gained significant attention for problems in computer vision and visual computing: coordinate-based neural networks parameterizing a field, such as a neural network that maps a 3D spatial coordinate to a flow field in fluid dynamics, or a colour and density field in 3D scene representation. Such networks are often referred to as "neural fields". The application of neural fields in visual computing has led to remarkable progress on various computer vision problems such as 3D scene reconstruction and generative modelling, leading to more accurate, higher-fidelity, more expressive, and computationally cheaper solutions. Given that neural fields can represent spatio-temporal signals in arbitrary input/output dimensions, they are highly general as a tool to reason about real-world observations, be it common modalities in machine learning and vision such as images, 3D shapes, 3D scenes, video, and speech/audio, or more specialized modalities such as flow fields in physics, scenes in robotics, medical images in computational biology, and weather data in climate science. However, though some adjacent fields such as robotics have recently seen an increased interest in this area, most of the current research is still confined to visual computing, and the application of neural fields in other fields is in its early stages. We thus propose a workshop that aims to bring together researchers from a diverse set of backgrounds including machine learning, computer vision, robotics, applied mathematics, physics, chemistry, biology and climate science to exchange ideas and expand the domains of application of neural fields.
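For concreteness, here is a minimal sketch of such a coordinate-based network (an illustrative toy with assumed layer sizes, not a model from any cited work): a small MLP mapping continuous 2D coordinates to a 3-channel field, which can then be queried at arbitrary points.

```python
import numpy as np

def init_field_mlp(rng, in_dim=2, hidden=64, out_dim=3):
    """Random weights for a small coordinate MLP (a 'neural field')."""
    sizes = [(in_dim, hidden), (hidden, hidden), (hidden, out_dim)]
    return [(rng.standard_normal(s) / np.sqrt(s[0]), np.zeros(s[1])) for s in sizes]

def field(params, coords):
    """Map continuous coordinates (N, 2) to field values (N, 3), e.g. RGB."""
    h = coords
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)   # smooth hidden nonlinearity
    W, b = params[-1]
    return h @ W + b             # unbounded field values

rng = np.random.default_rng(0)
params = init_field_mlp(rng)
# query the field on a coarse 4x4 grid of (x, y) coordinates in [-1, 1]^2
xy = np.stack(np.meshgrid(np.linspace(-1, 1, 4), np.linspace(-1, 1, 4)), -1).reshape(-1, 2)
rgb = field(params, xy)          # shape (16, 3)
```

Because the input is a continuous coordinate rather than a pixel index, the same parameters define the field at any sampling density, which is what makes neural fields attractive across modalities.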
Neurosymbolic Generative Models (NeSy-GeMs)
The Neurosymbolic Generative Models (NeSy-GeMs) workshop at ICLR 2023 aims to bridge the Neurosymbolic AI and Generative Modeling communities, bringing together machine learning, neurosymbolic programming, knowledge representation and reasoning, tractable probabilistic modeling, probabilistic programming, and application researchers to discuss new research directions and define novel open challenges.
What do we need for successful domain generalization?
The real challenge for any machine learning system is to be reliable and robust in any situation, even one that differs from the training conditions. Existing general-purpose approaches to domain generalization (DG)—a problem setting that challenges a model to generalize well to data outside the distribution sampled at training time—have failed to consistently outperform standard empirical risk minimization baselines. In this workshop, we aim to work towards answering a single question: what do we need for successful domain generalization? We conjecture that additional information of some form is required for general-purpose learning methods to be successful in the DG setting. The purpose of this workshop is to identify possible sources of such information, and to demonstrate how these extra sources of data can be leveraged to construct models that are robust to distribution shift. Example areas of interest include using meta-data associated with each domain, examining how multimodal learning can enable robustness to distribution shift, and flexible frameworks for exploiting properties of the data that are known to be invariant to distribution shift.
From Molecules to Materials: ICLR 2023 Workshop on Machine Learning for Materials (ML4Materials)
The discovery of new materials drives the development of key technologies like solar cells, batteries, carbon capture, and catalysis. While there has been growing interest in materials discovery with machine learning, the specific modeling challenges posed by materials have been largely unknown to the broader community. Compared with drug-like molecules and proteins, the modeling of materials has the following two major challenges. First, materials-specific inductive biases are needed to develop successful ML models. For example, materials often don’t have a handy representation like 2D graphs for molecules or sequences for proteins. Second, there exists a broad range of interesting materials classes, such as inorganic crystals, polymers, catalytic surfaces, nanoporous materials, and more. Each class of materials demands a different approach to represent their structures, and new tasks/data sets to enable rapid ML developments. This workshop aims at bringing together the community to discuss and tackle these two types of challenges. In the first session, we will feature speakers to discuss the latest progress in developing ML models for materials focusing on algorithmic challenges, covering topics like representation learning, generative models, pre-training, etc. In particular, what can we learn from the more developed field of ML for molecules and 3D geometry and where might challenges differ and opportunities for novel developments lie? In the second session, we will feature speakers to discuss unique challenges for each sub-field of materials design and how to define meaningful tasks that are relevant to the domain, covering areas including inorganic materials, polymers, nanoporous materials, catalysis, etc. More specifically, what are the key materials design problems that ML can help tackle?