## Half-Inverse Gradients for Physical Deep Learning

### Patrick Schnell · Philipp Holl · Nils Thuerey

Keywords: [ Physical Simulation ] [ optimization ] [ partial differential equations ]

Wed 27 Apr 6:30 p.m. PDT — 8:30 p.m. PDT

Spotlight presentation.

Abstract:

Recent works in deep learning have shown that integrating differentiable physics simulators into the training process can greatly improve the quality of results. Although this combination represents a more complex optimization task than standard neural network training, the same gradient-based optimizers are used to minimize the loss function. However, the integrated physics solvers have a profound effect on the gradient flow, since rescaling magnitudes and directions is an inherent property of many physical processes. Consequently, the gradient flow is often highly unbalanced and creates an environment in which existing gradient-based optimizers perform poorly. In this work, we analyze the characteristics of physical and neural network optimizations separately to derive a new method based on a half-inversion of the Jacobian. Our approach combines principles of both classical network and physics optimizers to solve the combined optimization task. Compared to state-of-the-art neural network optimizers, our method converges more quickly and to better solutions, which we demonstrate on three complex learning problems involving nonlinear oscillators, the Schrödinger equation, and the Poisson problem.
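The abstract describes an update based on a half-inversion of the Jacobian. As a rough illustration of that idea (not the paper's exact algorithm), one can half-invert a Jacobian via its SVD by raising the singular values to the power −1/2 and use the result to precondition the output-space gradient; all function names and the truncation threshold `eps` below are hypothetical:

```python
import numpy as np

def half_inverse(J, eps=1e-6):
    # Hypothetical sketch: "half-invert" the Jacobian via its SVD,
    # raising singular values to the power -1/2 (tiny ones truncated
    # to zero for numerical stability).
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_half_inv = np.where(s > eps, s ** -0.5, 0.0)
    return Vt.T @ np.diag(s_half_inv) @ U.T

def hig_step(params, J, grad_y, lr=1e-2):
    # One hypothetical update step: instead of multiplying the
    # output-space gradient by the plain Jacobian transpose (as in
    # ordinary backpropagation), scale it by the half-inverted
    # Jacobian, which rebalances directions whose magnitudes the
    # physics solver has stretched or shrunk.
    return params - lr * half_inverse(J) @ grad_y
```

The exponent −1/2 interpolates between a plain gradient step (exponent +1, keeping the solver's scaling intact) and a full Gauss–Newton-style inversion (exponent −1, which can amplify noise along nearly singular directions).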
