Workshop | Fri 5:10
Spotlight 2: Yibo Yang and Stephan Mandt, Lower Bounding Rate-Distortion From Samples

Poster | Mon 1:00
Overfitting for Fun and Profit: Instance-Adaptive Data Compression
Ties van Rozendaal · Iris Huijben · Taco Cohen

Workshop | Fri 5:05
Spotlight 1: Lucas Theis & Aaron Wagner, A coding theorem for the rate-distortion-perception function

Workshop | Fri 11:45
Spotlight 9: George Zhang et al., Universal Rate-Distortion-Perception Representations for Lossy Compression

Poster | Wed 17:00
AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly
Yuchen Jin · Tianyi Zhou · Liangyu Zhao · Yibo Zhu · Chuanxiong Guo · Marco Canini · Arvind Krishnamurthy

Poster | Thu 1:00
AdamP: Slowing Down the Slowdown for Momentum Optimizers on Scale-invariant Weights
Byeongho Heo · Sanghyuk Chun · Seong Joon Oh · Dongyoon Han · Sangdoo Yun · Gyuwan Kim · Youngjung Uh · Jung-Woo Ha

Spotlight | Wed 5:15
Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods
Taiji Suzuki · Shunta Akiyama

Poster | Mon 17:00
Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods
Taiji Suzuki · Shunta Akiyama

Poster | Tue 9:00
On the Origin of Implicit Regularization in Stochastic Gradient Descent
Samuel Smith · Benoit Dherin · David Barrett · Soham De

Poster | Thu 1:00
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime
Atsushi Nitanda · Taiji Suzuki

Oral | Thu 0:30
Optimal Rates for Averaged Stochastic Gradient Descent under Neural Tangent Kernel Regime
Atsushi Nitanda · Taiji Suzuki