

Poster

On the Almost Sure Convergence of the Stochastic Three Points Algorithm

Taha EL BAKKALI EL KADI · Omar Saadi

Hall 3 + Hall 2B #368
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: The stochastic three points (STP) algorithm is a derivative-free optimization technique designed for unconstrained optimization problems in $\mathbb{R}^d$. In this paper, we analyze this algorithm for three classes of functions: smooth functions that may lack convexity, smooth convex functions, and smooth strongly convex functions. Our work provides the first almost sure convergence results for the STP algorithm, alongside some convergence results in expectation.

For the class of smooth functions, we establish that the best gradient iterate of the STP algorithm converges almost surely to zero at a rate of $o(1/T^{\frac{1}{2}-\epsilon})$ for any $\epsilon \in (0, \frac{1}{2})$, where $T$ is the number of iterations. Furthermore, within the same class of functions, we establish both almost sure convergence and convergence in expectation of the final gradient iterate to zero.

For the class of smooth convex functions, we establish that $f(\theta_T)$ converges to $\inf_{\theta \in \mathbb{R}^d} f(\theta)$ almost surely at a rate of $o(1/T^{1-\epsilon})$ for any $\epsilon \in (0, 1)$, and in expectation at a rate of $O(\frac{d}{T})$, where $d$ is the dimension of the space.

Finally, for the class of smooth strongly convex functions, we establish that when step sizes are obtained by approximating the directional derivatives of the function, $f(\theta_T)$ converges to $\inf_{\theta \in \mathbb{R}^d} f(\theta)$ in expectation at a rate of $O((1-\frac{\mu}{dL})^T)$, and almost surely at a rate of $o((1-s\frac{\mu}{dL})^T)$ for any $s \in (0, 1)$, where $\mu$ and $L$ are the strong convexity and smoothness parameters of the function.
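For intuition, here is a minimal Python sketch of the STP iteration: at each step, a random direction is sampled and the next iterate is the best of the current point and its two perturbations along that direction. The uniform-on-the-sphere direction sampling and the $\alpha_0/\sqrt{t+1}$ step-size decay shown here are illustrative assumptions, not necessarily the schedules analyzed in the paper.

```python
import numpy as np

def stp(f, theta0, T=1000, alpha0=0.5):
    """Sketch of the Stochastic Three Points (STP) method.

    f      : objective R^d -> R (only function evaluations are used)
    theta0 : initial point
    T      : number of iterations
    alpha0 : base step size (decayed as alpha0 / sqrt(t+1); an illustrative choice)
    """
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    for t in range(T):
        alpha = alpha0 / np.sqrt(t + 1)       # diminishing step size
        s = np.random.randn(d)
        s /= np.linalg.norm(s)                # direction uniform on the unit sphere
        # Keep the best of the three points: theta, theta + alpha*s, theta - alpha*s
        theta = min([theta, theta + alpha * s, theta - alpha * s], key=f)
    return theta

# Usage: minimize a simple smooth, strongly convex quadratic
f = lambda x: 0.5 * np.dot(x, x)
print(stp(f, np.ones(10)))
```

Because the update only compares function values, the objective sequence $f(\theta_t)$ is non-increasing by construction, which is the monotonicity the convergence analysis builds on.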
