
MON 3 MAY

1 a.m. (ends 3:00 AM)

3 a.m.

Ozan Sener, Yutian Chen, Blake Richards

Orals 3:00-3:30

[3:00]
Dataset Condensation with Gradient Matching

[3:15]
Free Lunch for Few-shot Learning: Distribution Calibration

Spotlights 3:30-3:50

[3:30]
Deciphering and Optimizing Multi-Task Learning: a Random Matrix Approach

[3:40]
Generalization in data-driven models of primary visual cortex

Q&As 3:50-4:00

[3:50]
Q&A

Orals 4:00-4:30

[4:00]
Towards Nonlinear Disentanglement in Natural Data with Temporal Sparse Coding

[4:15]
A Distributional Approach to Controlled Text Generation

Spotlights 4:30-4:50

[4:30]
The Intrinsic Dimension of Images and Its Impact on Learning

[4:40]
How Benign is Benign Overfitting?

Q&As 4:50-5:00

[4:50]
Q&A

Orals 5:00-5:45

[5:00]
Geometry-aware Instance-reweighted Adversarial Training

[5:15]
Do 2D GANs Know 3D Shape? Unsupervised 3D Shape Reconstruction from 2D Image GANs

[5:30]
Rethinking the Role of Gradient-based Attribution Methods for Model Interpretability

Spotlights 5:45-5:55

[5:45]
Contrastive Divergence Learning is a Time Reversal Adversarial Game

Q&As 5:55-6:05

[5:55]
Q&A

(ends 6:05 AM)

6:15 a.m.

BREAK: Please visit the Sponsor Hall, the Socials, and Mentorships.

9 a.m. (ends 11:00 AM)

11 a.m.

Orals 11:00-11:45

[11:00]
Federated Learning Based on Dynamic Regularization

[11:15]
Gradient Projection Memory for Continual Learning

[11:30]
Growing Efficient Deep Networks by Structured Continuous Sparsification

Spotlights 11:45-11:55

[11:45]
Geometry-Aware Gradient Algorithms for Neural Architecture Search

Q&As 11:55-12:05

[11:55]
Q&A

Spotlights 12:05-1:05

[12:05]
Generalization bounds via distillation

[12:15]
On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers

[12:25]
Sharpness-aware Minimization for Efficiently Improving Generalization

[12:35]
Systematic generalisation with group invariant predictions

[12:45]
On Statistical Bias In Active Learning: How and When to Fix It

[12:55]
Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images

Q&As 1:05-1:20

[1:05]
Q&A

Spotlights 1:20-2:10

[1:20]
Uncertainty Sets for Image Classifiers using Conformal Prediction

[1:30]
PMI-Masking: Principled masking of correlated spans

[1:40]
Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models

[1:50]
Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration

[2:00]
Predicting Infectiousness for Proactive Contact Tracing

Q&As 2:10-2:23

[2:10]
Q&A

(ends 2:23 PM)

2:30 p.m.

BREAK

5 p.m. (ends 7:00 PM)

7 p.m.

Orals 7:00-7:45

[7:00]
SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments

[7:15]
Contrastive Explanations for Reinforcement Learning via Embedded Self Predictions

[7:30]
Parrot: Data-Driven Behavioral Priors for Reinforcement Learning

Spotlights 7:45-8:05

[7:45]
Structured Prediction as Translation between Augmented Natural Languages

[7:55]
Mathematical Reasoning via Self-supervised Skip-tree Training

Q&As 8:05-8:18

[8:05]
Q&A

Spotlights 8:18-9:08

[8:18]
Improving Adversarial Robustness via Channel-wise Activation Suppressing

[8:28]
Fast Geometric Projections for Local Robustness Certification

[8:38]
Information Laundering for Model Privacy

[8:48]
Dataset Inference: Ownership Resolution in Machine Learning

[8:58]
HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark

Q&As 9:08-9:21

[9:08]
Q&A

Orals 9:21-9:36

[9:21]
How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks

Spotlights 9:36-10:06

[9:36]
Graph Convolution with Low-rank Learnable Local Filters

[9:46]
The Traveling Observer Model: Multi-task Learning Through Spatial Variable Embeddings

[9:56]
Meta-GMVAE: Mixture of Gaussian VAE for Unsupervised Meta-Learning

Q&As 10:06-10:16

[10:06]
Q&A

(ends 10:16 PM)

10:30 p.m.

BREAK

TUE 4 MAY

midnight

Invited Talk: Michael Bronstein (ends 1:00 AM)

1 a.m. (ends 3:00 AM)

3 a.m.

Orals 3:00-3:15

[3:00]
End-to-end Adversarial Text-to-Speech

Spotlights 3:15-3:55

[3:15]
Autoregressive Entity Retrieval

[3:25]
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking

[3:35]
Expressive Power of Invariant and Equivariant Graph Neural Networks

[3:45]
Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs

Q&As 3:55-4:08

[3:55]
Q&A

Orals 4:08-4:38

[4:08]
Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator

[4:23]
Scalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes

Spotlights 4:38-4:58

[4:38]
Multivariate Probabilistic Time Series Forecasting via Conditioned Normalizing Flows

[4:48]
Noise against noise: stochastic label noise helps combat inherent label noise

Q&As 4:58-5:08

[4:58]
Q&A

Spotlights 5:08-5:48

[5:08]
Mutual Information State Intrinsic Control

[5:18]
Learning Incompressible Fluid Dynamics from Scratch - Towards Fast, Differentiable Fluid Models that Generalize

[5:28]
Identifying nonlinear dynamical systems with multiple time scales and long-range dependencies

[5:38]
Fidelity-based Deep Adiabatic Scheduling

Q&As 5:48-5:58

[5:48]
Q&A

(ends 5:58 AM)

6 a.m.

BREAK

9 a.m. (ends 11:00 AM)

11 a.m.

Orals 11:00-11:30

[11:00]
Iterated learning for emergent systematicity in VQA

[11:15]
Learning Generalizable Visual Representations via Interactive Gameplay

Spotlights 11:30-11:50

[11:30]
How Does Mixup Help With Robustness and Generalization?

[11:40]
Recurrent Independent Mechanisms

Q&As 11:50-12:00

[11:50]
Q&A

Orals 12:00-12:30

[12:00]
Randomized Automatic Differentiation

[12:15]
Image GANs meet Differentiable Rendering for Inverse Graphics and Interpretable 3D Neural Rendering

Spotlights 12:30-1:00

[12:30]
Mind the Pad -- CNNs Can Develop Blind Spots

[12:40]
Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time

[12:50]
Learning from Protein Structure with Geometric Vector Perceptrons

Q&As 1:00-1:13

[1:00]
Q&A

Orals 1:13-1:28

[1:13]
On the mapping between Hopfield networks and Restricted Boltzmann Machines

Spotlights 1:28-1:48

[1:28]
Learning-based Support Estimation in Sublinear Time

[1:38]
Long-tail learning via logit adjustment

Q&As 1:48-1:56

[1:48]
Q&A

(ends 1:56 PM)

3 p.m.

BREAK

5 p.m. (ends 7:00 PM)

7 p.m.

Orals 7:00-7:15

[7:00]
Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients

Spotlights 7:15-7:45

[7:15]
DDPNOpt: Differential Dynamic Programming Neural Optimizer

[7:25]
Orthogonalizing Convolutional Layers with the Cayley Transform

[7:35]
Model-Based Visual Planning with Self-Supervised Functional Distances

Q&As 7:45-7:55

[7:45]
Q&A

Orals 7:55-8:10

[7:55]
Global Convergence of Three-layer Neural Networks in the Mean Field Regime

Spotlights 8:10-8:50

[8:10]
Minimum Width for Universal Approximation

[8:20]
Async-RED: A Provably Convergent Asynchronous Block Parallel Stochastic Method using Deep Denoising Priors

[8:30]
Individually Fair Gradient Boosting

[8:40]
Are Neural Rankers still Outperformed by Gradient Boosted Decision Trees?

Q&As 8:50-9:03

[8:50]
Q&A

Orals 9:03-9:33

[9:03]
Co-Mixup: Saliency Guided Joint Mixup with Supermodular Diversity

[9:18]
MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training

Spotlights 9:33-10:03

[9:33]
Locally Free Weight Sharing for Network Width Search

[9:43]
Memory Optimization for Deep Networks

[9:53]
Neural Topic Model via Optimal Transport

Q&As 10:03-10:16

[10:03]
Q&A

(ends 10:16 PM)

10:30 p.m.

BREAK

WED 5 MAY

1 a.m. (ends 3:00 AM)

3 a.m.

Orals 3:00-3:45

[3:00]
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

[3:15]
Rethinking Attention with Performers

[3:30]
Share or Not? Learning to Schedule Language-Specific Capacity for Multilingual Translation

Spotlights 3:45-3:55

[3:45]
Support-set bottlenecks for video-text representation learning

Q&As 3:55-4:05

[3:55]
Q&A

Orals 4:05-4:20

[4:05]
Getting a CLUE: A Method for Explaining Uncertainty Estimates

Spotlights 4:20-4:50

[4:20]
Influence Estimation for Generative Adversarial Networks

[4:30]
Stabilized Medical Image Attacks

[4:40]
Deep Neural Network Fingerprinting by Conferrable Adversarial Examples

Q&As 4:50-5:00

[4:50]
Q&A

Orals 5:00-5:15

[5:00]
Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency

Spotlights 5:15-5:55

[5:15]
Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods

[5:25]
Tent: Fully Test-Time Adaptation by Entropy Minimization

[5:35]
Neural Approximate Sufficient Statistics for Implicit Models

[5:45]
Implicit Normalizing Flows

Q&As 5:55-6:08

[5:55]
Q&A

(ends 6:08 AM)

6:15 a.m.

BREAK

9 a.m.

Exploring the Uncertainty Properties of Neural Networks’ Implicit Priors in the Infinite-Width Limit

(ends 11:00 AM)

11 a.m.

Orals 11:00-12:00

[11:00]
Human-Level Performance in No-Press Diplomacy via Equilibrium Search

[11:15]
Learning to Reach Goals via Iterated Supervised Learning

[11:30]
Learning Invariant Representations for Reinforcement Learning without Reconstruction

[11:45]
Evolving Reinforcement Learning Algorithms

Spotlights 12:00-12:10

[12:00]
Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels

Q&As 12:10-12:23

[12:10]
Q&A

Orals 12:23-12:38

[12:23]
Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies

Spotlights 12:38-1:08

[12:38]
Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy

[12:48]
LambdaNetworks: Modeling long-range Interactions without Attention

[12:58]
Grounded Language Learning Fast and Slow

Q&As 1:08-1:18

[1:08]
Q&A

Spotlights 1:18-2:08

[1:18]
Unsupervised Object Keypoint Learning using Local Spatial Predictability

[1:28]
VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models

[1:38]
Dynamic Tensor Rematerialization

[1:48]
A Gradient Flow Framework For Analyzing Network Pruning

[1:58]
Differentially Private Learning Needs Better Features (or Much More Data)

Q&As 2:08-2:21

[2:08]
Q&A

(ends 2:21 PM)

3 p.m.

BREAK

4 p.m.

Orals 4:00-4:45

[4:00]
Neural Synthesis of Binaural Speech From Mono Audio

[4:15]
EigenGame: PCA as a Nash Equilibrium

[4:30]
Score-Based Generative Modeling through Stochastic Differential Equations

Spotlights 4:45-4:55

[4:45]
Learning Mesh-Based Simulation with Graph Networks

Q&As 4:55-5:05

[4:55]
Q&A

(ends 5:05 PM)

5 p.m. (ends 7:00 PM)

7 p.m.

Orals 7:00-7:15

[7:00]
Improved Autoregressive Modeling with Distribution Smoothing

Spotlights 7:15-7:45

[7:15]
GAN "Steerability" without optimization

[7:25]
Large Scale Image Completion via Co-Modulated Generative Adversarial Networks

[7:35]
Emergent Symbols through Binding in External Memory

Q&As 7:45-7:55

[7:45]
Q&A

Orals 7:55-8:10

[7:55]
Deformable DETR: Deformable Transformers for End-to-End Object Detection

Spotlights 8:10-9:00

[8:10]
Graph-Based Continual Learning

[8:20]
Understanding the role of importance weighting for deep learning

[8:30]
Towards Robustness Against Natural Language Word Substitutions

[8:40]
Undistillable: Making A Nasty Teacher That CANNOT teach students

[8:50]
CPT: Efficient Deep Neural Network Training via Cyclic Precision

Q&As 9:00-9:15

[9:00]
Q&A

Spotlights 9:15-9:55

[9:15]
PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics

[9:25]
Regularization Matters in Policy Optimization - An Empirical Study on Continuous Control

[9:35]
Regularized Inverse Reinforcement Learning

[9:45]
Behavioral Cloning from Noisy Demonstrations

Q&As 9:55-10:05

[9:55]
Q&A

(ends 10:05 PM)

10:15 p.m.

BREAK

THU 6 MAY

midnight

Orals 12:00-12:45

[12:00]
Rethinking Architecture Selection in Differentiable NAS

[12:15]
Complex Query Answering with Neural Link Predictors

[12:30]
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime

Spotlights 12:45-12:55

[12:45]
Beyond Fully-Connected Layers with Quaternions: Parameterization of Hypercomplex Multiplications with $1/n$ Parameters

Q&As 12:55-1:05

[12:55]
Q&A

(ends 1:05 AM)

1 a.m. (ends 3:00 AM)

3 a.m.

Orals 3:00-3:15

[3:00]
What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study

Spotlights 3:15-4:05

[3:15]
Winning the L2RPN Challenge: Power Grid Management via Semi-Markov Afterstate Actor-Critic

[3:25]
UPDeT: Universal Multi-agent RL via Policy Decoupling with Transformers

[3:35]
Quantifying Differences in Reward Functions

[3:45]
Iterative Empirical Game Solving via Single Policy Best Response

[3:55]
Discovering a set of policies for the worst case reward

Q&As 4:05-4:20

[4:05]
Q&A

Orals 4:20-4:35

[4:20]
Augmenting Physical Models with Deep Networks for Complex Dynamics Forecasting

Spotlights 4:35-5:25

[4:35]
Unlearnable Examples: Making Personal Data Unexploitable

[4:45]
Self-supervised Visual Reinforcement Learning with Object-centric Representations

[4:55]
On Self-Supervised Image Representations for GAN Evaluation

[5:05]
Retrieval-Augmented Generation for Code Summarization via Hybrid GNN

[5:15]
Practical Real Time Recurrent Learning with a Sparse Approximation

Q&As 5:25-5:40

[5:25]
Q&A

(ends 5:40 AM)

6 a.m.

BREAK

8 a.m.

Invited Talk: Kyu Jin Cho (ends 9:00 AM)

9 a.m. (ends 11:00 AM)

11 a.m.

Orals 11:00-12:00

[11:00]
VCNet and Functional Targeted Regularization For Learning Causal Effects of Continuous Treatments

[11:15]
SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness

[11:30]
When Do Curricula Work?

[11:45]
Why Are Convolutional Nets More Sample-Efficient than Fully-Connected Nets?

Q&As 12:00-12:10

[12:00]
Q&A

Spotlights 12:10-12:50

[12:10]
Correcting experience replay for multi-agent communication

[12:20]
Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning

[12:30]
DeepAveragers: Offline Reinforcement Learning By Solving Derived Non-Parametric MDPs

[12:40]
Data-Efficient Reinforcement Learning with Self-Predictive Representations

Q&As 12:50-1:00

[12:50]
Q&A

Orals 1:00-1:30

[1:00]
DiffWave: A Versatile Diffusion Model for Audio Synthesis

[1:15]
Self-training For Few-shot Transfer Across Extreme Task Differences

Spotlights 1:30-2:00

[1:30]
A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference

[1:40]
BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration

[1:50]
Disentangled Recurrent Wasserstein Autoencoder

Q&As 2:00-2:13

[2:00]
Q&A

(ends 2:13 PM)

3 p.m.

BREAK

5 p.m. (ends 7:00 PM)

7 p.m.

Orals 7:00-7:15

[7:00]
Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data

Spotlights 7:15-7:45

[7:15]
Long-tailed Recognition by Routing Diverse Distribution-Aware Experts

[7:25]
Self-Supervised Policy Adaptation during Deployment

[7:35]
What are the Statistical Limits of Offline RL with Linear Function Approximation?

Q&As 7:45-7:55

[7:45]
Q&A

Spotlights 7:55-8:45

[7:55]
RMSprop converges with proper hyper-parameter

[8:05]
A Good Image Generator Is What You Need for High-Resolution Video Synthesis

[8:15]
Random Feature Attention

[8:25]
Learning with Feature-Dependent Label Noise: A Progressive Approach

[8:35]
Sparse Quantized Spectral Clustering

Q&As 8:45-8:58

[8:45]
Q&A

Spotlights 8:58-9:38

[8:58]
Learning a Latent Simplex in Input Sparsity Time

[9:08]
Topology-Aware Segmentation Using Discrete Morse Theory

[9:18]
MARS: Markov Molecular Sampling for Multi-objective Drug Discovery

[9:28]
Distributional Sliced-Wasserstein and Applications to Generative Modeling

Q&As 9:38-9:48

[9:38]
Q&A

(ends 9:48 PM)

10 p.m.

BREAK

FRI 7 MAY

5:15 a.m.

Workshop: (ends 10:00 AM)

6 a.m.

BREAK

7 a.m.

Workshop: 2nd Workshop on Practical ML for Developing Countries: Learning Under Limited/low Resource Scenarios (ends 12:15 PM)

Workshop: (ends 8:30 PM)

2 p.m.

BREAK