Oral in Affinity Workshop: Tiny Papers Oral Session 4
Nonlinear model reduction for operator learning
Hamidreza Eivazi · Stefan Wittek · Andreas Rausch
Operator learning provides methods to approximate mappings between infinite-dimensional function spaces. Deep operator networks (DeepONets) are a notable architecture in this field. Recently, an extension of DeepONet based on model reduction and neural networks, proper orthogonal decomposition (POD)-DeepONet, has outperformed other architectures in accuracy on several benchmark tests. We extend this idea toward nonlinear model order reduction by proposing an efficient framework that combines neural networks with kernel principal component analysis (KPCA) for operator learning. We conduct experiments on three test cases, including the Navier–Stokes equations. Our results demonstrate the superior performance of KPCA-DeepONet over POD-DeepONet.
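The core idea — replacing POD's linear projection of the output function space with a nonlinear KPCA reduction, then learning the reduced coefficients with a network — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the snapshot data is synthetic, and the kernel, component count, and ridge parameter are placeholder choices; scikit-learn's KernelPCA stands in for the KPCA step, with its pre-image approximation recovering functions from reduced coordinates.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Hypothetical snapshot matrix: 200 output functions, each sampled
# at 64 spatial points (synthetic data for illustration only).
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 64))

# Nonlinear reduction of the output function space via KPCA.
# fit_inverse_transform=True enables the learned pre-image map
# needed to go from reduced coefficients back to function values;
# alpha is the ridge regularization of that inverse map.
kpca = KernelPCA(n_components=8, kernel="rbf",
                 fit_inverse_transform=True, alpha=1e-3)
coeffs = kpca.fit_transform(snapshots)   # (200, 8) reduced coordinates

# In the KPCA-DeepONet framework, a DeepONet-style network would be
# trained to predict these coefficients from the input function; here
# we simply reconstruct from the true coefficients to show the pipeline.
reconstructed = kpca.inverse_transform(coeffs)  # (200, 64) pre-images

print(coeffs.shape, reconstructed.shape)
```

The analogous POD-DeepONet pipeline would use a truncated SVD of the snapshot matrix in place of the kernel map, which is exactly the linear special case this framework generalizes.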