In-Context Learning of Temporal Point Processes with Foundation Inference Models
Abstract
Modeling multi-type event sequences with marked temporal point processes (MTPPs) provides a principled framework for uncovering governing dynamical rules and predicting future events. Current neural approaches to MTPP inference typically require training a separate, specialized model for each target system. We pursue a fundamentally different strategy: leveraging amortized inference and in-context learning, we pretrain a deep neural network to infer, in-context, the conditional intensity functions of event histories from a context of observed event sequences. Pretraining is performed on a large synthetic dataset of MTPPs sampled from a broad distribution over point processes. Once pretrained, our Foundation Inference Model for Point Processes (FIM-PP) can estimate MTPPs from real-world data without additional training, or be rapidly finetuned to specific target systems. Experiments show that FIM-PP matches the performance of specialized models on multi-event prediction across common benchmark datasets.
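The interface the abstract describes, a pretrained network that maps a context of event sequences plus a query time to conditional intensity estimates, without any gradient updates at inference time, can be sketched as follows. This is a minimal illustrative stand-in, not the authors' FIM-PP architecture: the class name `InContextIntensitySketch`, the mean-pooled context embedding, and the tiny random-weight network are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # Numerically stable softplus; keeps intensity estimates non-negative.
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

class InContextIntensitySketch:
    """Toy amortized estimator: a fixed network maps a context of event
    sequences and a query time to one intensity value per mark. A real
    FIM-PP-style model would use pretrained weights; these are random."""

    def __init__(self, n_marks, d=16):
        self.W_ctx = rng.normal(size=(2, d)) / np.sqrt(2)        # embeds (time, mark) pairs
        self.W_out = rng.normal(size=(d + 1, n_marks)) / np.sqrt(d + 1)

    def embed_context(self, sequences):
        # Pool event embeddings over all context sequences
        # (permutation-invariant summary of the observed history).
        events = np.concatenate(sequences, axis=0)               # rows: (time, mark)
        return np.tanh(events @ self.W_ctx).mean(axis=0)         # shape (d,)

    def intensity(self, sequences, t_query):
        # No training happens here: inference is a single forward pass
        # conditioned on the context, i.e. in-context estimation.
        ctx = self.embed_context(sequences)
        feats = np.concatenate([ctx, [t_query]])
        return softplus(feats @ self.W_out)                      # one rate per mark

# Context: two observed sequences of (timestamp, mark-id) events.
context = [np.array([[0.1, 0.0], [0.7, 1.0], [1.3, 0.0]]),
           np.array([[0.4, 1.0], [0.9, 1.0]])]
model = InContextIntensitySketch(n_marks=2)
lam = model.intensity(context, t_query=1.5)
print(lam.shape, bool((lam > 0).all()))
```

The key property mirrored here is that the context enters only through a forward pass, so adapting to a new system means swapping the context sequences, not retraining the network.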