

Invited Talk in Workshop: GroundedML: Anchoring Machine Learning in Classical Algorithmic Theory

Distribution-dependent generalization bounds for noisy, iterative learning algorithms

Gintare Dziugaite


Abstract:

Deep learning approaches dominate in many application areas. Still, our understanding of generalization in deep learning is incomplete. Despite progress in uncovering and understanding the phenomena underlying strong generalization, significant gaps remain. In this talk, I will discuss barriers to understanding generalization using traditional tools from statistical learning theory and explain why any explanation of generalization in deep learning must be data-dependent. I will discuss the role of empirical evaluation of theories of deep learning and propose a framework for how we ought to evaluate such theories empirically. I will then discuss my work on information-theoretic approaches to understanding generalization of noisy, iterative learning algorithms, such as Stochastic Gradient Langevin Dynamics, a noisy version of SGD.
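For readers unfamiliar with Stochastic Gradient Langevin Dynamics, the sketch below (not part of the talk; names and parameters are illustrative) shows a single update: a standard stochastic-gradient step with isotropic Gaussian noise whose scale is tied to the step size.

```python
import numpy as np

def sgld_step(theta, grad_loss, step_size, temperature=1.0, rng=None):
    """One Stochastic Gradient Langevin Dynamics (SGLD) update.

    theta:       current parameter vector (np.ndarray)
    grad_loss:   stochastic (minibatch) gradient of the loss at theta
    step_size:   learning rate eta_t
    temperature: scales the injected noise (1.0 gives the standard SGLD form)
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=theta.shape)
    # SGD step plus Gaussian noise with standard deviation sqrt(2 * eta_t * T);
    # setting temperature to 0 recovers plain SGD.
    return theta - step_size * grad_loss + np.sqrt(2.0 * step_size * temperature) * noise
```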
