Refine Now, Query Fast: A Decoupled Refinement Paradigm for Implicit Neural Fields
Tianyu Xiong · Skylar Wurster · Han Wei Shen
Abstract
Implicit Neural Representations (INRs) have emerged as powerful surrogates for large-scale scientific simulations, but their practical application is often hindered by a fundamental trade-off: high-fidelity MLP-based models are computationally expensive and slow to query, while fast embedding-based models lack expressive power. To resolve this, we propose the Decoupled Representation Refinement (DRR) paradigm. DRR uses a deep refiner network in a one-time, offline process to encode rich representations into a compact, efficient embedding structure, thereby decoupling the high-capacity but slow neural network from the fast inference path. We introduce DRR-Net, a simple network that validates this paradigm, along with Variational Pairs (VP), a novel data augmentation strategy for improving INRs on complex tasks such as high-dimensional surrogate modeling. Experiments on several ensemble simulation datasets demonstrate that our approach achieves state-of-the-art fidelity while running up to 27$\times$ faster at inference than high-fidelity baselines and remaining competitive with the fastest models. The DRR paradigm offers an effective strategy for building powerful, practical neural field surrogates and general-purpose INRs with minimal compromise between speed and quality.
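To make the decoupling concrete, the PyTorch sketch below illustrates the general pattern the abstract describes: a deep refiner runs once, offline, to bake richer features into a compact embedding grid, after which inference touches only the cheap grid lookup plus a tiny decoder. This is a minimal illustration of the paradigm, not the paper's DRR-Net; all names (`RefinerMLP`, `FeatureGridINR`, `refine_offline`) and architectural choices here are hypothetical assumptions.

```python
# Minimal sketch of the Decoupled Representation Refinement (DRR) idea,
# assuming a dense 3D feature-grid INR refined once offline by a deep
# network. Hypothetical illustration, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RefinerMLP(nn.Module):
    """Deep, slow network used ONLY during the one-time offline refinement."""
    def __init__(self, feat_dim: int, hidden: int = 256, depth: int = 6):
        super().__init__()
        layers, d = [], feat_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, feat_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats + self.net(feats)  # residual refinement of embeddings


class FeatureGridINR(nn.Module):
    """Fast inference path: trilinear grid lookup + a tiny decoder MLP."""
    def __init__(self, res: int = 32, feat_dim: int = 16, out_dim: int = 1):
        super().__init__()
        # Dense 3D feature grid, shape (1, C, D, H, W) for F.grid_sample.
        self.grid = nn.Parameter(torch.randn(1, feat_dim, res, res, res) * 0.01)
        self.decoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, out_dim))

    def query(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) in [-1, 1]; reshape to grid_sample's layout.
        g = coords.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(self.grid, g, align_corners=True)  # (1, C, N, 1, 1)
        feats = feats.view(self.grid.shape[1], -1).t()           # (N, C)
        return self.decoder(feats)


@torch.no_grad()
def refine_offline(inr: FeatureGridINR, refiner: RefinerMLP) -> None:
    """One-time pass: push every grid cell's feature through the deep
    refiner and bake the result back into the grid. Afterwards the refiner
    is discarded; queries never pay its cost."""
    c = inr.grid.shape[1]
    feats = inr.grid.view(c, -1).t()                       # (cells, C)
    inr.grid.copy_(refiner(feats).t().reshape(inr.grid.shape))


if __name__ == "__main__":
    inr, refiner = FeatureGridINR(), RefinerMLP(feat_dim=16)
    refine_offline(inr, refiner)        # slow network runs once, offline
    pts = torch.rand(4096, 3) * 2 - 1   # query points in [-1, 1]^3
    print(inr.query(pts).shape)         # fast path only: (4096, 1)
```

The design choice the sketch highlights is that the refiner's capacity is paid for once, at refinement time, while the per-query cost stays that of the embedding lookup and the small decoder.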