Poster in Workshop: Pitfalls of limited data and computation for Trustworthy ML

Identifying Incorrect Annotations in Multi-label Classification Data

Aditya Thyagarajan · Elias Snorrason · Curtis Northcutt · Jonas Mueller


Abstract:

In multi-label classification, each example in a dataset may be annotated as belonging to one or more classes (or none of the classes). Example applications include image or document tagging, where each possible tag either applies to a particular image or document or it does not. With many possible classes to consider, data annotators are likely to make errors when labeling such data in practice. Here we consider algorithms for finding mislabeled examples in multi-label classification datasets. We propose an extension of the Confident Learning framework to this setting, as well as a label quality score that ranks examples with label errors much higher than those that are correctly labeled. Both approaches can utilize any trained classifier. We demonstrate that our methodology empirically outperforms many other algorithms for label error detection. Applying the method to CelebA reveals over 30,000 incorrectly tagged images in this dataset.
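To make the idea of a per-example label quality score concrete, below is a minimal sketch in NumPy. It assumes you have a (n_examples, n_classes) binary annotation matrix and out-of-sample predicted probabilities from any trained classifier, as the abstract describes. The specific scoring rule shown here (minimum per-class self-confidence across tags) is an illustrative assumption, not necessarily the authors' exact formulation; the function name and toy data are hypothetical.

```python
import numpy as np

def multilabel_label_quality_scores(labels: np.ndarray, pred_probs: np.ndarray) -> np.ndarray:
    """Score each example's annotated tag set in [0, 1]; lower scores suggest likely annotation errors.

    labels:     (n_examples, n_classes) binary matrix; labels[i, k] = 1 if class k was annotated for example i.
    pred_probs: (n_examples, n_classes) predicted probability that each class applies, from any trained classifier.
    """
    # Per-class self-confidence: probability the model assigns to the annotated binary label
    # (p if the tag was annotated as present, 1 - p if annotated as absent).
    self_confidence = np.where(labels == 1, pred_probs, 1.0 - pred_probs)
    # Aggregate across classes; taking the minimum flags an example if any single tag looks wrong.
    return self_confidence.min(axis=1)

# Toy usage: 3 examples, 4 possible tags.
labels = np.array([[1, 0, 0, 1],
                   [0, 1, 0, 0],   # tag 0 may be missing from this annotation
                   [1, 1, 0, 0]])
pred_probs = np.array([[0.9, 0.1, 0.2, 0.8],
                       [0.7, 0.9, 0.1, 0.1],
                       [0.8, 0.9, 0.1, 0.1]])
scores = multilabel_label_quality_scores(labels, pred_probs)
ranking = np.argsort(scores)  # examples most likely to contain an annotation error come first
print(scores, ranking)
```

The min aggregation is one of several reasonable choices: it prioritizes examples where even a single tag conflicts strongly with the model, whereas a mean over classes would favor examples whose annotations look dubious across many tags.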
