Neural Stochastic Dual Dynamic Programming

Hanjun Dai · Yuan Xue · Zia Syed · Dale Schuurmans · Bo Dai

Keywords: [ learning to optimize ]

Tue 26 Apr 6:30 p.m. PDT — 8:30 p.m. PDT

Abstract: Stochastic dual dynamic programming (SDDP) is a state-of-the-art method for solving multi-stage stochastic optimization, widely used for modeling real-world process optimization tasks. Unfortunately, SDDP has a worst-case complexity that scales exponentially in the number of decision variables, which severely limits its applicability to low-dimensional problems. To overcome this limitation, we extend SDDP by introducing a trainable neural model that learns to map problem instances to a piecewise-linear value function within an intrinsic low-dimensional space. The model is architected specifically to interact with a base SDDP solver, so that it can accelerate optimization performance on new instances. The proposed Neural Stochastic Dual Dynamic Programming ($\nu$-SDDP) continually self-improves by solving successive problems. An empirical investigation demonstrates that $\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality compared to competitors such as SDDP and reinforcement learning algorithms, across a range of synthetic and real-world process optimization problems.
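The piecewise-linear value function mentioned in the abstract is the core object SDDP maintains: a lower bound on the convex future-cost function, represented as the maximum over a growing set of affine "cuts." The sketch below is a minimal illustration of that representation only (not the paper's $\nu$-SDDP model); the class name and interface are hypothetical.

```python
class CutApproximation:
    """Piecewise-linear lower bound V(x) >= max_k (alpha_k + beta_k . x),
    as maintained by SDDP-style cutting-plane methods. Each solver
    iteration would add one more cut, tightening the bound."""

    def __init__(self):
        self.cuts = []  # list of (alpha, beta): intercept and slope vector

    def add_cut(self, alpha, beta):
        self.cuts.append((alpha, list(beta)))

    def value(self, x):
        # Evaluate the bound: max over all cuts; -inf before any cut exists.
        if not self.cuts:
            return float("-inf")
        return max(alpha + sum(b * xi for b, xi in zip(beta, x))
                   for alpha, beta in self.cuts)


# Usage: two cuts exactly recover the convex function V(x) = |x| in 1-D.
approx = CutApproximation()
approx.add_cut(0.0, [1.0])    # V(x) >= x
approx.add_cut(0.0, [-1.0])   # V(x) >= -x
print(approx.value([2.0]))    # 2.0
print(approx.value([-3.0]))   # 3.0
```

The exponential worst-case complexity the abstract refers to arises because the number of cuts needed for a tight bound can grow rapidly with the state dimension; $\nu$-SDDP's learned low-dimensional mapping is aimed at mitigating exactly that growth.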
