In-Person Oral presentation / top 25% paper

Distilling Model Failures as Directions in Latent Space

Saachi Jain · Hannah Lawrence · Ankur Moitra · Aleksander Madry

Oral 5 Track 3: Deep Learning and representational learning
Wed 3 May 1:30 a.m. — 1:40 a.m. PDT

Existing methods for isolating hard subpopulations and spurious correlations in datasets often require human intervention, which makes them labor-intensive and dataset-specific. To address these shortcomings, we present a scalable method for automatically distilling a model's failure modes. Specifically, we harness linear classifiers to identify consistent error patterns, and, in turn, induce a natural representation of these failure modes as directions within the feature space. We demonstrate that this framework allows us to discover and automatically caption challenging subpopulations within the training dataset. Moreover, by combining our framework with off-the-shelf diffusion models, we can generate images that are especially challenging for the analyzed model, and thus can be used to perform synthetic data augmentation that helps remedy the model's failure modes.
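As a rough illustration of the idea described in the abstract, the sketch below fits a linear classifier in a shared latent space to separate a model's correct and incorrect predictions, and treats the resulting weight vector as a "failure direction" along which the hardest examples can be surfaced. This is a simplified reconstruction under stated assumptions, not the authors' released code: the function name, the input arrays, and the choice of scikit-learn's LinearSVC are all placeholders.

```python
# Minimal sketch: distill a failure mode as a direction in latent space.
# Assumes you already have per-example latent embeddings (e.g., from a
# vision-language encoder) and a boolean flag for whether the analyzed
# model classified each example correctly; both are hypothetical inputs.
import numpy as np
from sklearn.svm import LinearSVC


def distill_failure_direction(embeddings: np.ndarray, is_correct: np.ndarray):
    """embeddings: (n, d) latent features; is_correct: (n,) booleans."""
    # Linear classifier separating correctly vs. incorrectly predicted examples.
    svm = LinearSVC(C=0.1, max_iter=10_000)
    svm.fit(embeddings, is_correct.astype(int))

    # Normalized weight vector: points from error-prone toward well-classified
    # examples, so its negation captures a consistent failure mode.
    direction = svm.coef_[0] / np.linalg.norm(svm.coef_[0])

    # Project every example onto the direction; the lowest-scoring examples
    # form a candidate hard subpopulation to inspect, caption, or augment.
    scores = embeddings @ direction
    hard_subpopulation = np.argsort(scores)[:50]
    return direction, scores, hard_subpopulation
```

In the same spirit, the low-scoring end of such a direction could be handed to a captioning model to describe the subpopulation, or used to steer an off-the-shelf diffusion model toward generating similarly challenging images for data augmentation; those steps are not shown here.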
