Landing with the Score: Riemannian Optimization through Denoising
Abstract
Under the \emph{data manifold hypothesis}, high-dimensional data concentrate near a low-dimensional manifold. We study Riemannian optimization when this manifold is given only implicitly through the data distribution and standard geometric operations are unavailable. This formulation captures a broad class of data-driven design problems that are central to modern generative AI. Our key idea is a \emph{link function} that ties the data distribution to the geometric quantities needed for optimization: in the small-noise regime, its gradient and Hessian recover the projection onto the manifold and the projection onto its tangent space. This construction is directly connected to the score function in diffusion models, allowing us to leverage well-studied parameterizations, efficient training procedures, and even pretrained score networks from the diffusion model literature to perform optimization. On top of this foundation, we develop two efficient inference-time algorithms for optimization over data manifolds: \emph{Denoising Landing Flow} (DLF) and \emph{Denoising Riemannian Gradient Descent} (DRGD). We provide theoretical guarantees for approximate feasibility (manifold adherence) and approximate optimality (small Riemannian gradient norm). We demonstrate the effectiveness of our approach on finite-horizon reference-tracking tasks in data-driven control, illustrating its potential for practical generative and design applications.
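As a concrete sketch of the link-function idea (under assumed notation; the paper's exact definition may differ), suppose the data distribution is smoothed with Gaussian noise of level $\sigma$, with smoothed density $p_\sigma$, and take $\varphi_\sigma(x) = \sigma^2 \log p_\sigma(x)$. Tweedie's formula identifies the gradient of $\varphi_\sigma$ with a denoising step, and differentiating once more recovers the tangent-space projector in the small-noise limit:
\[
x + \nabla \varphi_\sigma(x) \;=\; \mathbb{E}\!\left[x_0 \mid x_0 + \sigma\epsilon = x\right] \;\xrightarrow{\,\sigma \to 0\,}\; \Pi_{\mathcal{M}}(x),
\qquad
I + \nabla^2 \varphi_\sigma(x) \;\xrightarrow{\,\sigma \to 0\,}\; P_{T_{\Pi_{\mathcal{M}}(x)}\mathcal{M}},
\]
where $\Pi_{\mathcal{M}}$ denotes the Euclidean projection onto the data manifold $\mathcal{M}$ and $P_{T_y\mathcal{M}}$ the orthogonal projector onto its tangent space at $y$; the symbols $\varphi_\sigma$, $\Pi_{\mathcal{M}}$, and $P_{T_y\mathcal{M}}$ are notational choices for this sketch rather than the paper's own.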