- Tara Sainath, Google
Senior Program Chair
- Alexander Rush, Harvard University
- Sergey Levine, UC Berkeley
- Karen Livescu, TTI-Chicago
- Shakir Mohamed, Google DeepMind
- Been Kim, Google Brain
- Graham Taylor, University of Guelph / Vector Institute
- Alice Oh, KAIST
- Richard Zemel, University of Toronto / Vector Institute
- Registration will open on January 29, 2019
- Student travel award and volunteer applications will open with registration
- Do not book travel until you have registered for the conference
- Apply for your visa (if necessary) by March 7, 2019. See our Registration Cancellation Policy.
- Sep 27, Paper Submission Deadline
- Late Jan 2019, Registration Opens
- Mar 19, Early Reg Pricing Deadline
- Apr 16, Reg Cancellation Deadline
New Orleans, Louisiana
May 6 – 9, 2019
The International Conference on Learning Representations (ICLR) is the premier gathering of professionals dedicated to the advancement of representation learning, the branch of artificial intelligence generally referred to as deep learning.
ICLR is globally renowned for presenting and publishing cutting-edge research on all aspects of deep learning used in the fields of artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, text understanding, gaming, and robotics.
ICLR is one of the fastest growing artificial intelligence conferences in the world. Between May 6 and May 9, 2019, at the New Orleans Ernest N. Morial Convention Center in New Orleans, Louisiana, ICLR will host over 4,000 participants. Participants at ICLR span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.
The rapidly developing field of deep learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. ICLR takes a broad view of the field and includes topics such as feature learning, metric learning, compositional modeling, structured prediction, reinforcement learning, and issues regarding large-scale learning and non-convex optimization.
A non-exhaustive list of relevant topics explored at the conference includes:
- unsupervised, semi-supervised, and supervised representation learning
- representation learning for planning and reinforcement learning
- metric learning and kernel learning
- sparse coding and dimensionality expansion
- hierarchical models
- optimization for representation learning
- learning representations of outputs or states
- implementation issues, parallelization, software platforms, hardware
- applications in vision, audio, speech, natural language processing, robotics, neuroscience, or any other field