ICLR 2017

  
Below are the Workshop Track papers presented at each of the poster sessions (on Monday, Tuesday or Wednesday, in the morning or evening). To find a paper, look for the poster with the corresponding number in the area dedicated to the Workshop Track.

======Note to the Presenters======
Each poster panel is 2 meters wide and 1 meter tall.\\
If needed, tape will be provided to attach your poster.
  
<html><div id='monday_morning'></div></html>
W6: Accelerating Eulerian Fluid Simulation With Convolutional Networks\\
W7: Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels\\
W8: Dataset Augmentation in Feature Space\\
W9: Learning Algorithms for Active Learning\\
W10: Reinterpreting Importance-Weighted Autoencoders\\
W11: Robustness to Adversarial Examples through an Ensemble of Specialists\\
W12: (empty)\\
W13: On Hyperparameter Optimization in Learning Systems\\
W14: Recurrent Normalization Propagation\\
W17: Joint Embeddings of Scene Graphs and Images\\
W18: Unseen Style Transfer Based on a Conditional Fast Style Transfer Network\\
  
<html><div id='monday_afternoon'></div></html>
W17: Adversarial Discriminative Domain Adaptation (workshop extended abstract)\\
W18: Efficient Sparse-Winograd Convolutional Neural Networks\\
W19: Neural Expectation Maximization\\
  
<html><div id='tuesday_morning'></div></html>
  
<html><div id='tuesday_afternoon'></div></html>
====Tuesday Afternoon (April 25th, 2:00pm to 4:00pm)====
W1: Lifelong Perceptual Programming By Example\\
W2: Neu0\\
W15: Compositional Kernel Machines\\
W16: Loss is its own Reward: Self-Supervision for Reinforcement Learning\\
W17: REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models\\
W18: Precise Recovery of Latent Vectors from Generative Adversarial Networks\\
W19: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization\\
W9: Trace Norm Regularised Deep Multi-Task Learning\\
W10: Deep Learning with Sets and Point Clouds\\
W11: Deep Nets Don't Learn via Memorization\\
W12: Multiplicative LSTM for sequence modelling\\
W13: Learning to Discover Sparse Graphical Models\\
W10: Compact Embedding of Binary-coded Inputs and Outputs using Bloom Filters\\
W11: Semi-supervised deep learning by metric embedding\\
W12: Changing Model Behavior at Test-time Using Reinforcement Learning\\
W13: Variational Reference Priors\\
W14: Gated Multimodal Units for Information Fusion\\