

Poster

Improving Convergence Guarantees of Random Subspace Second-order Algorithm for Nonconvex Optimization

Rei Higuchi · Pierre-Louis Poirion · Akiko Takeda

Hall 3 + Hall 2B #343
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract: In recent years, random subspace methods have been actively studied for large-dimensional nonconvex problems. Recent subspace methods have improved theoretical guarantees such as iteration complexity and local convergence rate while reducing computational costs by deriving descent directions in randomly selected low-dimensional subspaces. This paper proposes the Random Subspace Homogenized Trust Region (RSHTR) method with the best theoretical guarantees among random subspace algorithms for nonconvex optimization. RSHTR achieves an ε-approximate first-order stationary point in O(ε^{-3/2}) iterations, converging locally at a linear rate. Furthermore, under rank-deficient conditions, RSHTR satisfies ε-approximate second-order necessary conditions in O(ε^{-3/2}) iterations and exhibits local quadratic convergence. Experiments on real-world datasets verify the benefits of RSHTR.
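To make the core idea concrete, here is a minimal sketch of a generic random-subspace second-order step: sample a random low-dimensional subspace, project the gradient and Hessian into it, solve a small (here, regularized-Newton) subproblem, and map the step back to the full space. This is an illustrative stand-in, not the RSHTR algorithm itself — the function names, the Gaussian sketch, the subspace dimension `s`, and the regularized solve in place of the homogenized trust-region subproblem are all assumptions for the sketch.

```python
import numpy as np

def random_subspace_second_order_step(grad, hess, x, s=5, reg=1e-6, rng=None):
    """One generic random-subspace second-order step (illustrative sketch,
    not RSHTR): descend in a random s-dimensional subspace using the
    projected gradient and Hessian."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    # Gaussian sketch matrix whose columns span a random s-dim subspace.
    P = rng.standard_normal((d, s)) / np.sqrt(s)
    g_s = P.T @ grad(x)        # subspace gradient, shape (s,)
    H_s = P.T @ hess(x) @ P    # subspace Hessian, shape (s, s)
    # Regularized Newton step inside the subspace (a simple stand-in for
    # the homogenized trust-region subproblem that RSHTR solves).
    u = np.linalg.solve(H_s + reg * np.eye(s), -g_s)
    return x + P @ u

# Usage on a toy quadratic f(x) = 0.5 * ||x||^2 in d = 20 dimensions.
grad = lambda x: x
hess = lambda x: np.eye(x.size)
x = np.ones(20)
for _ in range(50):
    x = random_subspace_second_order_step(grad, hess, x, s=5)
```

Each iteration only forms an s×s system rather than a d×d one, which is the computational saving the abstract refers to; the paper's contribution is showing that such subspace steps can still retain strong first- and second-order guarantees.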
