12 Results
Type | Session | Title | Authors
Poster | Wed 10:30 | Large Learning Rate Tames Homogeneity: Convergence and Balancing Effect | Yuqing Wang · Minshuo Chen · Tuo Zhao · Molei Tao
Spotlight | Tue 10:30 | What Happens after SGD Reaches Zero Loss? --A Mathematical Framework | Zhiyuan Li · Tianhao Wang · Sanjeev Arora
Poster | Tue 10:30 | Understanding Dimensional Collapse in Contrastive Self-supervised Learning | Li Jing · Pascal Vincent · Yann LeCun · Yuandong Tian
Poster | Tue 10:30 | What Happens after SGD Reaches Zero Loss? --A Mathematical Framework | Zhiyuan Li · Tianhao Wang · Sanjeev Arora
Poster | Tue 18:30 | Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution | Ananya Kumar · Aditi Raghunathan · Robbie Jones · Tengyu Ma · Percy Liang
Oral | Wed 9:00 | Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution | Ananya Kumar · Aditi Raghunathan · Robbie Jones · Tengyu Ma · Percy Liang
Poster | Mon 10:30 | Noisy Feature Mixup | Soon Hoe Lim · N. Benjamin Erichson · Francisco Utrera · Winnie Xu · Michael W Mahoney
Poster | Mon 18:30 | An Unconstrained Layer-Peeled Perspective on Neural Collapse | Wenlong Ji · Yiping Lu · Yiliang Zhang · Zhun Deng · Weijie J Su
Poster | Wed 10:30 | Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks | Benjamin Bowman · Guido Montufar
Poster | Mon 18:30 | Training invariances and the low-rank phenomenon: beyond linear networks | Thien Le · Stefanie Jegelka
Poster | Mon 10:30 | Stochastic Training is Not Necessary for Generalization | Jonas Geiping · Micah Goldblum · Phil Pope · Michael Moeller · Tom Goldstein
Poster | Mon 18:30 | Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization | Tolga Ergen · Arda Sahiner · Batu Ozturkler · John M Pauly · Morteza Mardani · Mert Pilanci