

Poster in Workshop: Setting up ML Evaluation Standards to Accelerate Progress

A Revealing Large-Scale Evaluation of Unsupervised Anomaly Detection Algorithms

Maxime Alvarez · Jean-Charles Verdier · DJeff Kanda Nkashama · Froduald Kabanza · Marc Frappier · Pierre Martin Tardif


Abstract:

Anomaly detection has many applications, ranging from bank-fraud and cyber-threat detection to equipment maintenance and health monitoring. However, choosing a suitable algorithm for a given application remains a challenging design decision, often informed by the anomaly detection literature. We extensively reviewed twelve of the most popular unsupervised anomaly detection methods. We observed that, so far, they have been compared using inconsistent protocols (the choice of the class of interest, i.e. the positive class; the split of training and test data; and the choice of hyperparameters), leading to ambiguous evaluations. This observation led us to define a coherent evaluation protocol, which we then used to produce an updated and more precise picture of the relative performance of the twelve methods on five widely used tabular datasets. While our evaluation cannot pinpoint a method that outperforms all the others on all datasets, it identifies those that stand out and revises misconceptions about their relative performance.
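
The sketch below illustrates the kind of consistency the abstract argues for: every detector is scored under one fixed convention (anomalies as the positive class), one shared train/test split, and default hyperparameters. It is a minimal illustration using scikit-learn detectors and synthetic data, not the paper's actual protocol, datasets, or set of twelve methods.

```python
# Hedged sketch of a consistent evaluation protocol for unsupervised
# anomaly detectors. Assumptions (not from the paper): scikit-learn
# detectors, synthetic Gaussian data, ROC AUC as the metric.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.RandomState(0)

# Synthetic tabular data: label 0 = normal, 1 = anomaly.
# Fixing anomalies as the positive class removes one source of
# ambiguity the abstract points to.
X = np.vstack([rng.normal(0, 1, size=(950, 8)),
               rng.normal(4, 1, size=(50, 8))])
y = np.concatenate([np.zeros(950), np.ones(50)])

# One shared, seeded split reused by every detector, rather than a
# different split per method.
X_train, X_test, _, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Default hyperparameters for all methods, so no detector benefits
# from per-dataset tuning the others did not receive.
detectors = {
    "IsolationForest": IsolationForest(random_state=0),
    "LocalOutlierFactor": LocalOutlierFactor(novelty=True),
}

for name, det in detectors.items():
    det.fit(X_train)
    # score_samples returns higher values for more normal points,
    # so negate it to obtain an anomaly score.
    scores = -det.score_samples(X_test)
    print(f"{name}: ROC AUC = {roc_auc_score(y_test, scores):.3f}")
```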
