The Ninth International Conference on Learning Representations (Virtual Only)
Tue May 4th through Sat the 8th

General Chair

  • Shakir Mohamed, DeepMind

Senior Program Chair

  • Katja Hofmann, Microsoft

Program Chairs

  • Alice Oh, KAIST
  • Naila Murray, Facebook AI Research
  • Ivan Titov, U Edinburgh / U Amsterdam

Workshop Chairs

  • Sanmi Koyejo, U Illinois UC
  • Chelsea Finn, Stanford

Area Chairs

Equity Diversity & Inclusion Chairs

  • Jane Wang, Google
  • Emtiyaz Khan, RIKEN AIP

Virtual Chairs

  • TBA
  • TBA

Socials Chair

  • TBA

Engagements Chair - Press, Sponsors

  • Viktoriia Sharmanska, Imperial
  • TBA

Contact

The organizers can be contacted here.

Sponsors

The generous support of our sponsors allowed us to reduce our ticket price by about 50% and to support diversity at the meeting with travel awards. In addition, many accepted papers at the conference were contributed by our sponsors.

View ICLR 2021 sponsors » Become a 2021 Sponsor » (sponsor application is not yet available)

Important Dates

Conference Sessions Tue May 4th through Fri the 7th
Workshops Sat May 8th
Workshop Application Open Sep 11 08:00 AM PDT *
Abstract Submission Deadline Sep 28 08:00 AM PDT *
Paper Submission deadline Oct 02 08:00 AM PDT *
Workshop Application Close Nov 09 10:00 PM PST *
Paper Decision Notification Jan 12 (Anywhere on Earth)
All dates » * Dates above are in Pacific Time

About Us

The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, generally referred to as deep learning.

ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.

Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

A non-exhaustive list of relevant topics explored at the conference includes:

  • unsupervised, semi-supervised, and supervised representation learning
  • representation learning for planning and reinforcement learning
  • representation learning for computer vision and natural language processing
  • metric learning and kernel learning
  • sparse coding and dimensionality expansion
  • hierarchical models
  • optimization for representation learning
  • learning representations of outputs or states
  • implementation issues, parallelization, software platforms, hardware
  • applications in audio, speech, robotics, neuroscience, computational biology, or any other field
  • societal considerations of representation learning including fairness, safety, privacy