[1:00]
Expressiveness and Approximation Properties of Graph Neural Networks
[1:15]
Neural Collapse Under MSE Loss: Proximity to and Dynamics on the Central Path
[1:30]
Learning Strides in Convolutional Neural Networks
[1:45]
The Hidden Convex Optimization Landscape of Regularized Two-Layer ReLU Networks: an Exact Characterization of Optimal Solutions
[2:00]
Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond
[2:15]
Discovering and Explaining the Representation Bottleneck of DNNs
[2:30]
Representational Continuity for Unsupervised Continual Learning