

Workshop

Easing non-convex optimization with neural networks

David Lopez-Paz · Levent Sagun

East Meeting Level 8 + 15 #2

Wed 2 May, 4:30 p.m. PDT

Despite being non-convex, deep neural networks are surprisingly amenable to optimization by gradient descent. In this note, we use a deep neural network with $D$ parameters to parametrize the input space of a generic $d$-dimensional non-convex optimization problem. Our experiments show that minimizing over the $D \gg d$ over-parametrized variables provided by the deep neural network eases and accelerates the optimization of various non-convex test functions.
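A minimal sketch of the reparametrization idea described in the abstract, not the authors' code: rather than minimizing a non-convex test function $g$ directly over $x \in \mathbb{R}^d$, set $x = \mathrm{net}_\theta(z)$ for a fixed input $z$ and run gradient descent over the network's $D \gg d$ parameters $\theta$. The choice of the Rastrigin test function, the network architecture, learning rates, and step counts are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

d = 2  # dimension of the original non-convex problem

def rastrigin(x):
    # Classic non-convex test function; global minimum 0 at x = 0.
    return 10 * d + (x ** 2 - 10 * torch.cos(2 * math.pi * x)).sum()

# Baseline: direct gradient descent over the d variables.
x = torch.randn(d, requires_grad=True)
opt_x = torch.optim.SGD([x], lr=1e-2)

# Over-parametrization: x = net(z), with D >> d trainable parameters.
net = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, d))
z = torch.randn(16)  # fixed network input; only theta is optimized
opt_theta = torch.optim.SGD(net.parameters(), lr=1e-2)

for step in range(2000):
    # Direct minimization of g(x).
    opt_x.zero_grad()
    rastrigin(x).backward()
    opt_x.step()

    # Minimization of g(net(z)) over the network parameters.
    opt_theta.zero_grad()
    rastrigin(net(z)).backward()
    opt_theta.step()

print(f"direct: {rastrigin(x).item():.3f}  "
      f"over-parametrized: {rastrigin(net(z)).item():.3f}")
```

Both loops use plain SGD so that any difference comes from the parametrization rather than the optimizer; the abstract's claim is that the over-parametrized route tends to reach lower values faster on such test functions.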
