# Workshop Poster Sessions

Below are the Workshop Track papers presented at each poster session (Monday, Tuesday, or Wednesday, in the morning or afternoon). To find a paper, look for the poster with the corresponding number in the area dedicated to the Workshop Track.

# Note to the Presenters

Each poster panel is 2 meters wide and 1 meter tall.

Tape will be provided if you need it to attach your poster.

### Monday Morning (April 24th, 10:30am to 12:30pm)

W1: Extrapolation and learning equations

W2: Effectiveness of Transfer Learning in EHR data

W3: Intelligent synapses for multi-task and transfer learning

W4: Unsupervised and Efficient Neural Graph Model with Distributed Representations

W5: Accelerating SGD for Distributed Deep-Learning Using an Approximated Hessian Matrix

W6: Accelerating Eulerian Fluid Simulation With Convolutional Networks

W7: Forced to Learn: Discovering Disentangled Representations Without Exhaustive Labels

W8: Dataset Augmentation in Feature Space

W9: Learning Algorithms for Active Learning

W10: Reinterpreting Importance-Weighted Autoencoders

W11: Robustness to Adversarial Examples through an Ensemble of Specialists

W12: (empty)

W13: On Hyperparameter Optimization in Learning Systems

W14: Recurrent Normalization Propagation

W15: Joint Training of Ratings and Reviews with Recurrent Recommender Networks

W16: Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses

W17: Joint Embeddings of Scene Graphs and Images

W18: Unseen Style Transfer Based on a Conditional Fast Style Transfer Network

### Monday Afternoon (April 24th, 4:30pm to 6:30pm)

W1: Audio Super-Resolution using Neural Networks

W2: Semantic embeddings for program behaviour patterns

W3: De novo drug design with deep generative models: an empirical study

W4: Memory Matching Networks for Genomic Sequence Classification

W5: Char2Wav: End-to-End Speech Synthesis

W6: Fast Chirplet Transform Injects Priors in Deep Learning of Animal Calls and Speech

W7: Weight-averaged consistency targets improve semi-supervised deep learning results

W8: Particle Value Functions

W9: Out-of-class novelty generation: an experimental foundation

W10: Performance guarantees for transferring representations

W11: Generative Adversarial Learning of Markov Chains

W12: Short and Deep: Sketching and Neural Networks

W13: Understanding intermediate layers using linear classifier probes

W14: Symmetry-Breaking Convergence Analysis of Certain Two-layered Neural Networks with ReLU nonlinearity

W15: Neural Combinatorial Optimization with Reinforcement Learning

W16: Tactics of Adversarial Attacks on Deep Reinforcement Learning Agents

W17: Adversarial Discriminative Domain Adaptation (workshop extended abstract)

W18: Efficient Sparse-Winograd Convolutional Neural Networks

W19: Neural Expectation Maximization

### Tuesday Morning (April 25th, 10:30am to 12:30pm)

W1: Programming With a Differentiable Forth Interpreter

W2: Unsupervised Feature Learning for Audio Analysis

W3: Neural Functional Programming

W4: A Smooth Optimisation Perspective on Training Feedforward Neural Networks

W5: Synthetic Gradient Methods with Virtual Forward-Backward Networks

W6: Explaining the Learning Dynamics of Direct Feedback Alignment

W7: Training a Subsampling Mechanism in Expectation

W8: Deep Kernel Machines via the Kernel Reparametrization Trick

W9: Encoding and Decoding Representations with Sum- and Max-Product Networks

W10: Embracing Data Abundance

W11: Variational Intrinsic Control

W12: Fast Adaptation in Generative Models with Generative Matching Networks

W13: Efficient variational Bayesian neural network ensembles for outlier detection

W14: Emergence of Language with Multi-agent Games: Learning to Communicate with Sequences of Symbols

W15: Adaptive Feature Abstraction for Translating Video to Language

W16: Delving into adversarial attacks on deep policies

W17: Tuning Recurrent Neural Networks with Reinforcement Learning

W18: DeepMask: Masking DNN Models for robustness against adversarial samples

W19: Restricted Boltzmann Machines provide an accurate metric for retinal responses to visual stimuli

### Tuesday Afternoon (April 25th, 2:00pm to 4:00pm)

W1: Lifelong Perceptual Programming By Example

W2: Neu0

W3: Dance Dance Convolution

W4: Bit-Pragmatic Deep Neural Network Computing

W5: On Improving the Numerical Stability of Winograd Convolutions

W6: Fast Generation for Convolutional Autoregressive Models

W7: The Preimage of Rectifier Network Activities

W8: Training Triplet Networks with GAN

W9: On Robust Concepts and Small Neural Nets

W10: Pl@ntNet app in the era of deep learning

W11: Exponential Machines

W12: Online Multi-Task Learning Using Biased Sampling

W13: Online Structure Learning for Sum-Product Networks with Gaussian Leaves

W14: A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Samples

W15: Compositional Kernel Machines

W16: Loss is its own Reward: Self-Supervision for Reinforcement Learning

W17: REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models

W18: Precise Recovery of Latent Vectors from Generative Adversarial Networks

W19: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization

### Wednesday Morning (April 26th, 10:30am to 12:30pm)

W1: Neurogenesis-Inspired Dictionary Learning: Online Model Adaption in a Changing World

W2: The High-Dimensional Geometry of Binary Neural Networks

W3: Discovering objects and their relations from entangled scene representations

W4: A Differentiable Physics Engine for Deep Learning in Robotics

W5: Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations

W6: Development of JavaScript-based deep learning platform and application to distributed training

W7: Factorization tricks for LSTM networks

W8: Shake-Shake regularization of 3-branch residual networks

W9: Trace Norm Regularised Deep Multi-Task Learning

W10: Deep Learning with Sets and Point Clouds

W11: Deep Nets Don't Learn via Memorization

W12: Multiplicative LSTM for sequence modelling

W13: Learning to Discover Sparse Graphical Models

W14: Revisiting Batch Normalization For Practical Domain Adaptation

W15: Early Methods for Detecting Adversarial Images and a Colorful Saliency Map

W16: Natural Language Generation in Dialogue using Lexicalized and Delexicalized Data

W17: Coupling Distributed and Symbolic Execution for Natural Language Queries

W18: Adversarial Examples for Semantic Image Segmentation

W19: RenderGAN: Generating Realistic Labeled Data

### Wednesday Afternoon (April 26th, 4:30pm to 6:30pm)

W1: Song From PI: A Musically Plausible Network for Pop Music Generation

W2: Charged Point Normalization: An Efficient Solution to the Saddle Point Problem

W3: Towards “AlphaChem”: Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies

W4: CommAI: Evaluating the first steps towards a useful general AI

W5: Joint Multimodal Learning with Deep Generative Models

W6: Transferring Knowledge to Smaller Network with Class-Distance Loss

W7: Regularizing Neural Networks by Penalizing Confident Output Distributions

W8: Adversarial Attacks on Neural Network Policies

W9: Generalizable Features From Unsupervised Learning

W10: Compact Embedding of Binary-coded Inputs and Outputs using Bloom Filters

W11: Semi-supervised deep learning by metric embedding

W12: Changing Model Behavior at Test-time Using Reinforcement Learning

W13: Variational Reference Priors

W14: Gated Multimodal Units for Information Fusion

W15: Playing SNES in the Retro Learning Environment

W16: Unsupervised Perceptual Rewards for Imitation Learning

W17: Perception Updating Networks: On architectural constraints for interpretable video generative models

W18: Adversarial examples in the physical world