
Poster in Workshop: AI4DifferentialEquations In Science

Comparing and Contrasting Deep Learning Weather Prediction Backbones on Navier-Stokes Dynamics

Matthias Karlbauer · Danielle Maddix · Abdul Fatir Ansari · Boran Han · Gaurav Gupta · Bernie Wang · Andrew Stuart · Michael W Mahoney


Abstract:

There has been remarkable progress in the development of Deep Learning Weather Prediction (DLWP) models, so much so that they are poised to become competitive with traditional numerical weather prediction (NWP) models. Indeed, a wide range of DLWP architectures, built on backbones such as the U-Net, Transformer, Graph Neural Network (GNN), and Fourier Neural Operator (FNO), have demonstrated their potential at forecasting atmospheric states. However, due to differences in training protocols, data choices (resolution, selected prognostic variables, or additional forcing inputs), and forecast horizons, it remains unclear which of these methods and architectures is most suitable for weather forecasting. Here, we provide a detailed empirical analysis, under controlled conditions, comparing and contrasting the most prominent backbones used in DLWP models. We do so by training each backbone, at several parameter counts, to predict two-dimensional incompressible Navier-Stokes dynamics across a range of Reynolds numbers. Our results illustrate tradeoffs in accuracy, memory consumption, and runtime, and they show favorable performance of the FNO backbone in comparison with the Transformer, U-Net, and GNN backbones.
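The abstract does not state the exact equations solved, but a standard formulation for this kind of benchmark (common in the FNO literature) is the vorticity form of the two-dimensional incompressible Navier-Stokes equations on a periodic domain with a fixed forcing; the sketch below assumes that setup rather than reproducing the paper's exact configuration:

\[
\partial_t \omega + (u \cdot \nabla)\,\omega = \nu\,\Delta\omega + f, \qquad
\nabla \cdot u = 0, \qquad
\omega(\cdot, 0) = \omega_0,
\]

where \(\omega = \nabla \times u\) is the vorticity, \(u\) is the velocity field, \(f\) is a fixed forcing, and \(\nu\) is the viscosity; in nondimensional form \(\nu = 1/\mathrm{Re}\), so larger Reynolds numbers produce more turbulent and harder-to-forecast dynamics.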