The Twelfth International Conference on Learning Representations

Vienna, Austria
May 7th, 2024 to May 11th, 2024


Registration

Pricing » Registration 2024 » Registration Cancellation Policy »

Announcements

  • Authors of eligible TMLR publications can submit a request to present at the ICLR conference! Find out more here.
  • Self-nomination form for ICLR 2024 reviewing. Interested in being a reviewer? Fill out the form!

Sponsors

We are very excited to be holding the ICLR 2024 annual conference in Vienna, Austria, from May 7–11, 2024.

Become a 2024 Sponsor »

Important Dates

Virtual Only Pass: Tue May 7th through Sat May 11th
Conference Sessions and Workshops: Tue May 7th through Sat May 11th
Saturday Workshop 1-Day Pass: Sat May 11th
Early Registration Deadline: Mar 01 '24 12:00 AM CET *
Registration Cancellation Deadline: Apr 15 '24 12:00 AM CEST *
All dates »


About Us

The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence called representation learning, but generally referred to as deep learning.

ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.

Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.

A non-exhaustive list of relevant topics explored at the conference includes:

  • unsupervised, semi-supervised, and supervised representation learning
  • representation learning for planning and reinforcement learning
  • representation learning for computer vision and natural language processing
  • metric learning and kernel learning
  • sparse coding and dimensionality expansion
  • hierarchical models
  • optimization for representation learning
  • learning representations of outputs or states
  • optimal transport
  • theoretical issues in deep learning
  • societal considerations of representation learning, including fairness, safety, privacy, interpretability, and explainability
  • visualization or interpretation of learned representations
  • implementation issues, parallelization, software platforms, hardware
  • climate and sustainability
  • applications in audio, speech, robotics, neuroscience, biology, or any other field