

In-Person Oral presentation / top 25% paper

Unsupervised Model Selection for Time Series Anomaly Detection

Mononito Goswami · Cristian Challu · Laurent Callot · Lenon Minorics · Andrey Kan

AD4

Abstract: Anomaly detection in time series has a wide range of practical applications. While numerous anomaly detection methods have been proposed in the literature, a recent survey concluded that no single method is the most accurate across various datasets. To make matters worse, anomaly labels are scarce and rarely available in practice. The practical problem of selecting the most accurate model for a given dataset without labels has received little attention in the literature. This paper answers that question: given an unlabeled dataset and a set of candidate anomaly detectors, how can we select the most accurate model? To this end, we identify three classes of surrogate (unsupervised) metrics, namely \textit{prediction error}, \textit{model centrality}, and \textit{performance on injected synthetic anomalies}, and show that some metrics are highly correlated with standard supervised anomaly detection performance metrics such as the $F_1$ score, albeit to varying degrees. We formulate metric combination with multiple imperfect surrogate metrics as a robust rank aggregation problem and provide theoretical justification for the proposed approach. Large-scale experiments on multiple real-world datasets demonstrate that our unsupervised approach is as effective as selecting the most accurate model based on partially labeled data.
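To make the rank-aggregation idea concrete, below is a minimal sketch that combines several surrogate metric scores for a set of candidate detectors by averaging per-metric ranks (a simple Borda-style aggregation used here for illustration, not the robust aggregation procedure from the paper). The function name, score matrix shape, and toy values are all hypothetical.

```python
import numpy as np

def aggregate_ranks(scores: np.ndarray) -> np.ndarray:
    """Combine several surrogate metrics by simple rank aggregation.

    scores: array of shape (n_models, n_metrics), where a higher value
    means a model looks better under that surrogate metric (e.g., negated
    prediction error, model centrality, F1 on injected synthetic anomalies).
    Returns each model's average rank across metrics (lower is better).
    """
    n_models, n_metrics = scores.shape
    ranks = np.empty_like(scores, dtype=float)
    for j in range(n_metrics):
        # Rank 0 = best model under metric j (ties broken arbitrarily).
        order = np.argsort(-scores[:, j])
        ranks[order, j] = np.arange(n_models)
    # Borda-style aggregation: average the per-metric ranks.
    return ranks.mean(axis=1)

# Toy example: 4 candidate detectors scored by 3 surrogate metrics.
surrogate_scores = np.array([
    [0.82, 0.10, 0.55],
    [0.90, 0.40, 0.70],
    [0.30, 0.95, 0.20],
    [0.85, 0.35, 0.65],
])
avg_rank = aggregate_ranks(surrogate_scores)
best_model = int(np.argmin(avg_rank))
print(avg_rank, "-> select model", best_model)
```

Averaging ranks rather than raw scores avoids having to put heterogeneous surrogate metrics on a common scale; a robust aggregation scheme would additionally down-weight metrics whose rankings disagree strongly with the consensus.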
