Long-tail learning via logit adjustment

Aditya Krishna Menon · Sadeep Jayasumana · Ankit Singh Rawat · Himanshu Jain · Andreas Veit · Sanjiv Kumar


Keywords: [ class imbalance ] [ long-tail learning ]

Wed 5 May 9 a.m. PDT — 11 a.m. PDT
Spotlight presentation: Oral Session 5
Tue 4 May 11 a.m. PDT — 1:56 p.m. PDT


Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels have only a few associated samples. This poses a challenge for generalisation on such labels, and also makes naive learning biased towards dominant labels. In this paper, we present a statistical framework that unifies and generalises several recent proposals to cope with these challenges. Our framework revisits the classic idea of logit adjustment based on the label frequencies, which encourages a large relative margin between logits of rare positive versus dominant negative labels. This yields two techniques for long-tail learning, where such adjustment is either applied post-hoc to a trained model, or enforced in the loss during training. These techniques are statistically grounded, and practically effective on four real-world datasets with long-tailed label distributions.
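The two techniques described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' reference implementation: the function names, the scaling parameter `tau`, and the two-class example are assumptions for exposition. Post-hoc adjustment subtracts a prior-dependent offset from a trained model's logits at prediction time; the loss-based variant adds the same offset inside the softmax during training, which encourages larger margins for rare classes.

```python
import numpy as np

def posthoc_logit_adjustment(logits, class_priors, tau=1.0):
    """Post-hoc technique: subtract tau * log(prior) from each class's
    logit before taking the argmax, boosting rare labels at test time."""
    return logits - tau * np.log(class_priors)

def logit_adjusted_cross_entropy(logits, labels, class_priors, tau=1.0):
    """Loss-based technique: add tau * log(prior) to the logits inside
    the softmax cross-entropy during training."""
    adjusted = logits + tau * np.log(class_priors)
    # standard softmax cross-entropy computed on the adjusted logits
    log_probs = adjusted - np.log(np.sum(np.exp(adjusted), axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(labels)), labels])

# Hypothetical two-class example: class 0 is dominant (prior 0.9),
# class 1 is rare (prior 0.1), and the raw logits are tied.
priors = np.array([0.9, 0.1])
tied_logits = np.array([[2.0, 2.0]])
adjusted = posthoc_logit_adjustment(tied_logits, priors)
# The adjustment breaks the tie in favour of the rare class.
print(np.argmax(adjusted[0]))
```

Note that both variants apply the same `tau * log(prior)` offset; the only difference is whether it is folded into the training loss or applied once to a model trained with the ordinary cross-entropy.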
