Timezone: Europe/Vienna

MON 3 MAY
9 a.m.
Opening Remarks
(ends 10:00 AM)
10 a.m.
Posters 10:00-12:00
noon
Moderators: Ozan Sener, Yutian Chen, Blake Richards
Orals 12:00-12:30
[12:00] Dataset Condensation with Gradient Matching
[12:15] Free Lunch for Few-shot Learning: Distribution Calibration
Spotlights 12:30-12:50
[12:30] Deciphering and Optimizing Multi-Task Learning: a Random Matrix Approach
[12:40] Generalization in data-driven models of primary visual cortex
Q&As 12:50-1:00
[12:50] Q&A
Orals 1:00-1:30
[1:00] Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding
[1:15] A Distributional Approach to Controlled Text Generation
Spotlights 1:30-1:50
[1:30] The Intrinsic Dimension of Images and Its Impact on Learning
[1:40] How Benign is Benign Overfitting?
Q&As 1:50-2:00
[1:50] Q&A
Orals 2:00-2:45
[2:00] Geometry-aware Instance-reweighted Adversarial Training
[2:15] Do 2D GANs Know 3D Shape? Unsupervised 3D Shape Reconstruction from 2D Image GANs
[2:30] Rethinking the Role of Gradient-based Attribution Methods for Model Interpretability
Spotlights 2:45-2:55
[2:45] Contrastive Divergence Learning is a Time Reversal Adversarial Game
Q&As 2:55-3:05
[2:55] Q&A
3:15 p.m.
Break
(ends 5:00 PM)
5 p.m.
Invited Talk: Timnit Gebru
(ends 6:00 PM)
6 p.m.
Posters 6:00-8:00
8 p.m.
Orals 8:00-8:45
[8:00] Federated Learning Based on Dynamic Regularization
[8:15] Gradient Projection Memory for Continual Learning
[8:30] Growing Efficient Deep Networks by Structured Continuous Sparsification
Spotlights 8:45-8:55
[8:45] Geometry-Aware Gradient Algorithms for Neural Architecture Search
Q&As 8:55-9:05
[8:55] Q&A
Spotlights 9:05-10:05
[9:05] Generalization bounds via distillation
[9:15] On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
[9:25] Sharpness-aware Minimization for Efficiently Improving Generalization
[9:35] Systematic generalisation with group invariant predictions
[9:45] On Statistical Bias In Active Learning: How and When to Fix It
[9:55] Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images
Q&As 10:05-10:20
[10:05] Q&A
Spotlights 10:20-11:10
[10:20] Uncertainty Sets for Image Classifiers using Conformal Prediction
[10:30] PMI-Masking: Principled masking of correlated spans
[10:40] Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models
[10:50] Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration
[11:00] Predicting Infectiousness for Proactive Contact Tracing
Q&As 11:10-11:23
[11:10] Q&A
11:30 p.m.
Break
(ends 1:00 AM)

TUE 4 MAY
1 a.m.
Invited Talk: Yejin Choi
(ends 2:00 AM)
2 a.m.
Posters 2:00-4:00
4 a.m.
Orals 4:00-4:45
[4:00] SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments
[4:15] Contrastive Explanations for Reinforcement Learning via Embedded Self Predictions
[4:30] Parrot: Data-Driven Behavioral Priors for Reinforcement Learning
Spotlights 4:45-5:05
[4:45] Structured Prediction as Translation between Augmented Natural Languages
[4:55] Mathematical Reasoning via Self-supervised Skip-tree Training
Q&As 5:05-5:18
[5:05] Q&A
Spotlights 5:18-6:08
[5:18] Improving Adversarial Robustness via Channel-wise Activation Suppressing
[5:28] Fast Geometric Projections for Local Robustness Certification
[5:38] Information Laundering for Model Privacy
[5:48] Dataset Inference: Ownership Resolution in Machine Learning
[5:58] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark
Q&As 6:08-6:21
[6:08] Q&A
Orals 6:21-6:36
[6:21] How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks
Spotlights 6:36-7:06
[6:36] Graph Convolution with Low-rank Learnable Local Filters
[6:46] The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings
[6:56] Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning
Q&As 7:06-7:16
[7:06] Q&A
7:30 a.m.
Break
(ends 7:30 AM)
9 a.m.
Invited Talk: Michael Bronstein
(ends 10:00 AM)
10 a.m.
Posters 10:00-12:00
noon
Orals 12:00-12:15
[12:00] End-to-end Adversarial Text-to-Speech
Spotlights 12:15-12:55
[12:15] Autoregressive Entity Retrieval
[12:25] Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
[12:35] Expressive Power of Invariant and Equivariant Graph Neural Networks
[12:45] Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs
Q&As 12:55-1:08
[12:55] Q&A
Orals 1:08-1:38
[1:08] Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator
[1:23] Scalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes
Spotlights 1:38-1:58
[1:38] Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows
[1:48] Noise against noise: stochastic label noise helps combat inherent label noise
Q&As 1:58-2:08
[1:58] Q&A
Spotlights 2:08-2:48
[2:08] Mutual Information State Intrinsic Control
[2:18] Learning Incompressible Fluid Dynamics from Scratch - Towards Fast, Differentiable Fluid Models that Generalize
[2:28] Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies
[2:38] Fidelity-based Deep Adiabatic Scheduling
Q&As 2:48-2:58
[2:48] Q&A
3 p.m.
Break
(ends 5:00 PM)
5 p.m.
Invited Talk: Manuela Veloso
(ends 6:00 PM)
6 p.m.
Posters 6:00-8:00
8 p.m.
Orals 8:00-8:30
[8:00] Iterated learning for emergent systematicity in VQA
[8:15] Learning Generalizable Visual Representations via Interactive Gameplay
Spotlights 8:30-8:50
[8:30] How Does Mixup Help With Robustness and Generalization?
[8:40] Recurrent Independent Mechanisms
Q&As 8:50-9:00
[8:50] Q&A
Orals 9:00-9:30
[9:00] Randomized Automatic Differentiation
[9:15] Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering
Spotlights 9:30-10:00
[9:30] Mind the Pad -- CNNs Can Develop Blind Spots
[9:40] Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time
[9:50] Learning from Protein Structure with Geometric Vector Perceptrons
Q&As 10:00-10:13
[10:00] Q&A
Orals 10:13-10:28
[10:13] On the mapping between Hopfield networks and Restricted Boltzmann Machines
Spotlights 10:28-10:48
[10:28] Learning-based Support Estimation in Sublinear Time
[10:38] Long-tail learning via logit adjustment
Q&As 10:48-10:56
[10:48] Q&A

WED 5 MAY
midnight
Break
(ends 1:00 AM)
1 a.m.
Town Hall
(ends 2:00 AM)
2 a.m.
Posters 2:00-4:00
4 a.m.
Orals 4:00-4:15
[4:00] Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients
Spotlights 4:15-4:45
[4:15] DDPNOpt: Differential Dynamic Programming Neural Optimizer
[4:25] Orthogonalizing Convolutional Layers with the Cayley Transform
[4:35] Model-Based Visual Planning with Self-Supervised Functional Distances
Q&As 4:45-4:55
[4:45] Q&A
Orals 4:55-5:10
[4:55] Global Convergence of Three-layer Neural Networks in the Mean Field Regime
Spotlights 5:10-5:50
[5:10] Minimum Width for Universal Approximation
[5:20] Async-RED: A Provably Convergent Asynchronous Block Parallel Stochastic Method using Deep Denoising Priors
[5:30] Individually Fair Gradient Boosting
[5:40] Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees?
Q&As 5:50-6:03
[5:50] Q&A
Orals 6:03-6:33
[6:03] Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity
[6:18] MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training
Spotlights 6:33-7:03
[6:33] Locally Free Weight Sharing for Network Width Search
[6:43] Memory Optimization for Deep Networks
[6:53] Neural Topic Model via Optimal Transport
Q&As 7:03-7:16
[7:03] Q&A
7:30 a.m.
Break
(ends 7:30 AM)
9 a.m.
Invited Talk: Lourdes Agapito
(ends 10:00 AM)
10 a.m.
Posters 10:00-12:00
noon
Orals 12:00-12:45
[12:00] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
[12:15] Rethinking Attention with Performers
[12:30] Share or Not? Learning to Schedule Language-Specific Capacity for Multilingual Translation
Spotlights 12:45-12:55
[12:45] Support-set bottlenecks for video-text representation learning
Q&As 12:55-1:05
[12:55] Q&A
Orals 1:05-1:20
[1:05] Getting a CLUE: A Method for Explaining Uncertainty Estimates
Spotlights 1:20-1:50
[1:20] Influence Estimation for Generative Adversarial Networks
[1:30] Stabilized Medical Image Attacks
[1:40] Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
Q&As 1:50-2:00
[1:50] Q&A
Orals 2:00-2:15
[2:00] Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency
Spotlights 2:15-2:55
[2:15] Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods
[2:25] Tent: Fully Test-Time Adaptation by Entropy Minimization
[2:35] Neural Approximate Sufficient Statistics for Implicit Models
[2:45] Implicit Normalizing Flows
Q&As 2:55-3:08
[2:55] Q&A
3:15 p.m.
Break
(ends 5:00 PM)
5 p.m.
Invited Talk: Kate Saenko
(ends 6:00 PM)
6 p.m.
Posters 6:00-8:00
8 p.m.
Orals 8:00-9:00
[8:00] Human-Level Performance in No-Press Diplomacy via Equilibrium Search
[8:15] Learning to Reach Goals via Iterated Supervised Learning
[8:30] Learning Invariant Representations for Reinforcement Learning without Reconstruction
[8:45] Evolving Reinforcement Learning Algorithms
Spotlights 9:00-9:10
[9:00] Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
Q&As 9:10-9:23
[9:10] Q&A
Orals 9:23-9:38
[9:23] Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies
Spotlights 9:38-10:08
[9:38] Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy
[9:48] LambdaNetworks: Modeling long-range Interactions without Attention
[9:58] Grounded Language Learning Fast and Slow
Q&As 10:08-10:18
[10:08] Q&A
Spotlights 10:18-11:08
[10:18] Unsupervised Object Keypoint Learning using Local Spatial Predictability
[10:28] VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models
[10:38] Dynamic Tensor Rematerialization
[10:48] A Gradient Flow Framework For Analyzing Network Pruning
[10:58] Differentially Private Learning Needs Better Features (or Much More Data)
Q&As 11:08-11:21
[11:08] Q&A

THU 6 MAY
midnight
Break
(ends 1:00 AM)
1 a.m.
Orals 1:00-1:45
[1:00] Neural Synthesis of Binaural Speech From Mono Audio
[1:15] EigenGame: PCA as a Nash Equilibrium
[1:30] Score-Based Generative Modeling through Stochastic Differential Equations
Spotlights 1:45-1:55
[1:45] Learning Mesh-Based Simulation with Graph Networks
Q&As 1:55-2:05
[1:55] Q&A
2 a.m.
Posters 2:00-4:00
4 a.m.
Orals 4:00-4:15
[4:00] Improved Autoregressive Modeling with Distribution Smoothing
Spotlights 4:15-4:45
[4:15] GAN "Steerability" without optimization
[4:25] Large Scale Image Completion via Co-Modulated Generative Adversarial Networks
[4:35] Emergent Symbols through Binding in External Memory
Q&As 4:45-4:55
[4:45] Q&A
Orals 4:55-5:10
[4:55] Deformable DETR: Deformable Transformers for End-to-End Object Detection
Spotlights 5:10-6:00
[5:10] Graph-Based Continual Learning
[5:20] Understanding the role of importance weighting for deep learning
[5:30] Towards Robustness Against Natural Language Word Substitutions
[5:40] Undistillable: Making A Nasty Teacher That CANNOT teach students
[5:50] CPT: Efficient Deep Neural Network Training via Cyclic Precision
Q&As 6:00-6:15
[6:00] Q&A
Spotlights 6:15-6:55
[6:15] PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics
[6:25] Regularization Matters in Policy Optimization - An Empirical Study on Continuous Control
[6:35] Regularized Inverse Reinforcement Learning
[6:45] Behavioral Cloning from Noisy Demonstrations
Q&As 6:55-7:05
[6:55] Q&A
7:15 a.m.
Break
(ends 7:15 AM)
9 a.m.
Orals 9:00-9:45
[9:00] Rethinking Architecture Selection in Differentiable NAS
[9:15] Complex Query Answering with Neural Link Predictors
[9:30] Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime
Spotlights 9:45-9:55
[9:45] Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with $1/n$ Parameters
Q&As 9:55-10:05
[9:55] Q&A
10 a.m.
Posters 10:00-12:00
noon
Orals 12:00-12:15
[12:00] What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study
Spotlights 12:15-1:05
[12:15] Winning the L2RPN Challenge: Power Grid Management via Semi-Markov Afterstate Actor-Critic
[12:25] UPDeT: Universal Multi-agent RL via Policy Decoupling with Transformers
[12:35] Quantifying Differences in Reward Functions
[12:45] Iterative Empirical Game Solving via Single Policy Best Response
[12:55] Discovering a set of policies for the worst case reward
Q&As 1:05-1:20
[1:05] Q&A
Orals 1:20-1:35
[1:20] Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting
Spotlights 1:35-2:25
[1:35] Unlearnable Examples: Making Personal Data Unexploitable
[1:45] Self-supervised Visual Reinforcement Learning with Object-centric Representations
[1:55] On Self-Supervised Image Representations for GAN Evaluation
[2:05] Retrieval-Augmented Generation for Code Summarization via Hybrid GNN
[2:15] Practical Real Time Recurrent Learning with a Sparse Approximation
Q&As 2:25-2:40
[2:25] Q&A
3 p.m.
Break
(ends 5:00 PM)
6 p.m.
Posters 6:00-8:00
8 p.m.
Orals 8:00-9:00
[8:00] VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments
[8:15] SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
[8:30] When Do Curricula Work?
[8:45] Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?
Q&As 9:00-9:10
[9:00] Q&A
Spotlights 9:10-9:50
[9:10] Correcting experience replay for multi-agent communication
[9:20] Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning
[9:30] DeepAveragers: Offline Reinforcement Learning By Solving Derived Non-Parametric MDPs
[9:40] Data-Efficient Reinforcement Learning with Self-Predictive Representations
Q&As 9:50-10:00
[9:50] Q&A
Orals 10:00-10:30
[10:00] DiffWave: A Versatile Diffusion Model for Audio Synthesis
[10:15] Self-training For Few-shot Transfer Across Extreme Task Differences
Spotlights 10:30-11:00
[10:30] A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
[10:40] BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration
[10:50] Disentangled Recurrent Wasserstein Autoencoder
Q&As 11:00-11:13
[11:00] Q&A
11 p.m.
Expo Talk Panel
(ends 12:00 AM)

FRI 7 MAY
midnight
Break
(ends 1:00 AM)
1 a.m.
Invited Talk: Alexei Efros
(ends 2:00 AM)
2 a.m.
Posters 2:00-4:00
4 a.m.
Orals 4:00-4:15
[4:00] Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
Spotlights 4:15-4:45
[4:15] Long-tailed Recognition by Routing Diverse Distribution-Aware Experts
[4:25] Self-Supervised Policy Adaptation during Deployment
[4:35] What are the Statistical Limits of Offline RL with Linear Function Approximation?
Q&As 4:45-4:55
[4:45] Q&A
Spotlights 4:55-5:45
[4:55] RMSprop converges with proper hyper-parameter
[5:05] A Good Image Generator Is What You Need for High-Resolution Video Synthesis
[5:15] Random Feature Attention
[5:25] Learning with Feature-Dependent Label Noise: A Progressive Approach
[5:35] Sparse Quantized Spectral Clustering
Q&As 5:45-5:58
[5:45] Q&A
Spotlights 5:58-6:38
[5:58] Learning a Latent Simplex in Input Sparsity Time
[6:08] Topology-Aware Segmentation Using Discrete Morse Theory
[6:18] MARS: Markov Molecular Sampling for Multi-objective Drug Discovery
[6:28] Distributional Sliced-Wasserstein and Applications to Generative Modeling
Q&As 6:38-6:48
[6:38] Q&A
7 a.m.
Break
(ends 7:00 AM)
Closing Remarks
(ends 8:00 AM)
11:30 a.m.
Workshop
(ends 8:30 PM)
2:55 p.m.
Workshop
(ends 12:00 AM)
3 p.m.
Workshop
(ends 10:55 PM)
Workshop
(ends 12:00 AM)
Break
(ends 3:00 PM)
3:45 p.m.
Workshop
(ends 4:00 AM)
5:45 p.m.
Workshop
(ends 2:00 AM)
11 p.m.
Break
(ends 11:00 PM)