======ICLR 2017======

Below are the Conference Track papers presented at each of the poster sessions (on Monday, Tuesday or Wednesday, in the morning or evening). To find a paper, look for the poster with the corresponding number in the area dedicated to the Conference Track.

======Note to the Presenters======
Each poster panel is 2 meters wide and 1 meter tall.\\
If needed, tape will be provided to attach your poster.
  
<html><div id='monday_morning'></div></html>
C11: Pruning Filters for Efficient ConvNets\\
C12: Stick-Breaking Variational Autoencoders\\
C13: Identity Matters in Deep Learning\\
C14: On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima\\
C15: Recurrent Hidden Semi-Markov Model\\
C11: PixelCNN++: A PixelCNN Implementation with Discretized Logistic Mixture Likelihood and Other Modifications\\
C12: Learning to Optimize\\
C13: Do Deep Convolutional Nets Really Need to be Deep and Convolutional?\\
C14: Optimal Binary Autoencoding with Pairwise Correlations\\
C15: On the Quantitative Analysis of Decoder-Based Generative Models\\
  
<html><div id='tuesday_afternoon'></div></html>
====Tuesday Afternoon (April 25th, 2:00pm to 4:00pm)====
C1: Sigma Delta Quantized Networks\\
C2: Paleo: A Performance Model for Deep Neural Networks\\
C21: Temporal Ensembling for Semi-Supervised Learning\\
C22: On Detecting Adversarial Perturbations\\
C23: Understanding deep learning requires rethinking generalization\\
C24: Adversarial Feature Learning\\
C25: Learning through Dialogue Interactions\\
C13: Support Regularized Sparse Coding and Its Fast Encoder\\
C14: Discrete Variational Autoencoders\\
C15: Training Compressed Fully-Connected Networks with a Density-Diversity Penalty\\
C16: Efficient Representation of Low-Dimensional Manifolds using Deep Networks\\
C17: Semi-Supervised Classification with Graph Convolutional Networks\\