

Poster
in
Workshop: AI4DifferentialEquations In Science

The conjugate kernel for efficient training of physics-informed deep operator networks

Amanda Howard · Saad Qadeer · Andrew Engel · Adam Tsou · Max Vargas · Tony Chiang · Panos Stinis


Abstract:

Recent work has shown that the empirical Neural Tangent Kernel (NTK) can significantly improve the training of physics-informed Deep Operator Networks (DeepONets). The NTK, however, is costly to compute, greatly increasing the cost of training such systems. In this paper, we study the performance of the empirical Conjugate Kernel (CK) for physics-informed DeepONets; the CK is an efficient approximation to the NTK that has been observed to yield similar results. We show that for physics-informed DeepONets the CK performs comparably to the NTK while significantly reducing the time complexity of NTK-based training.
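The distinction between the two kernels can be illustrated with a minimal sketch (a toy one-hidden-layer network in NumPy, not the paper's DeepONet architecture or its actual training procedure): the empirical CK is just the Gram matrix of the network's last-layer features, whereas the empirical NTK is the Gram matrix of gradients with respect to *all* parameters, which is why it is more expensive to form.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 3, 64, 5                         # input dim, hidden width, sample count
W = rng.normal(size=(h, d)) / np.sqrt(d)   # hidden-layer weights
v = rng.normal(size=h) / np.sqrt(h)        # output-layer weights
X = rng.normal(size=(n, d))                # toy input samples

# Network: f(x) = v . tanh(W x); hidden features phi(x) = tanh(W x)
Phi = np.tanh(X @ W.T)                     # (n, h)

# Empirical conjugate kernel: Gram matrix of last-layer features only.
CK = Phi @ Phi.T                           # (n, n)

# Empirical NTK: Gram matrix of the full parameter gradient df/dtheta.
#   df/dv = phi(x)
#   df/dW = outer(v * (1 - phi(x)^2), x)
G_v = Phi                                  # (n, h) gradients w.r.t. v
Gate = (1.0 - Phi**2) * v                  # (n, h) tanh' gated by v
G_W = Gate[:, :, None] * X[:, None, :]     # (n, h, d) gradients w.r.t. W
NTK = G_v @ G_v.T + np.einsum('ihd,jhd->ij', G_W, G_W)
```

Because the CK is exactly the output-layer block of the NTK, `NTK - CK` here equals the Gram matrix of the hidden-weight gradients alone; dropping that block is what makes the CK cheap, since no backpropagation through earlier layers is needed.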
