

Poster

Adaptive backtracking for faster optimization

Joao V. Cavalcanti · Laurent Lessard · Ashia Wilson

Hall 3 + Hall 2B #596
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Backtracking line search is foundational in numerical optimization. The basic idea is to adjust the step size of an algorithm by a *constant* factor until some chosen criterion (e.g., Armijo, Descent Lemma) is satisfied. We propose a novel way to adjust step sizes, replacing the constant factor used in regular backtracking with one that takes into account the degree to which the chosen criterion is violated, at no additional computational cost. This lightweight adjustment leads to significantly faster optimization, which we confirm through a variety of experiments on over fifteen real-world datasets. For convex problems, we prove that adaptive backtracking requires no more adjustments to produce a feasible step size than regular backtracking does. For nonconvex smooth problems, we prove that adaptive backtracking enjoys the same guarantees as regular backtracking. Furthermore, we prove that adaptive backtracking preserves the convergence rates of gradient descent and its accelerated variant.
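To make the idea concrete, here is a minimal sketch of a violation-aware backtracking step under the Armijo condition. The shrink rule used below (clipping the ratio of achieved to required decrease) is one illustrative choice of an adaptive factor, not necessarily the exact update proposed in the paper; the function name, parameters, and clipping bounds are assumptions for the example.

```python
import numpy as np

def adaptive_backtracking(f, grad_f, x, t=1.0, c=0.5, beta=0.5, max_iter=50):
    """One gradient step with a violation-aware backtracking line search.

    Regular backtracking multiplies the step size t by the constant `beta`
    whenever the Armijo condition fails. The adaptive variant sketched here
    instead shrinks t by a factor that depends on how severely the condition
    is violated (an illustrative rule, not necessarily the paper's update).
    """
    g = grad_f(x)
    gnorm2 = float(g @ g)
    fx = f(x)
    for _ in range(max_iter):
        required = c * t * gnorm2      # decrease the Armijo condition demands
        actual = fx - f(x - t * g)     # decrease the trial step achieves
        if actual >= required:
            break                      # Armijo satisfied: accept this t
        # Violation-aware shrink: a mild violation (ratio near 1) shrinks t
        # only slightly; a severe one (ratio near 0) shrinks it aggressively.
        # The [0.1, 0.9] bounds are arbitrary safeguards for this sketch.
        ratio = max(actual, 0.0) / required
        t *= float(np.clip(ratio, 0.1, 0.9))
    return x - t * g, t
```

On the quadratic f(x) = ||x||², starting from x = 10 with t = 1, the first trial violates Armijo badly (zero achieved decrease), so the step shrinks by the floor factor 0.1 and the resulting t = 0.1 is accepted.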
