Poster in Workshop on Spurious Correlation and Shortcut Learning: Foundations and Solutions
On the Unreasonable Effectiveness of Last-Layer Retraining
John Hill · Tyler LaBonte · Xinchen Zhang · Vidya Muthukumar
Keywords: [ last-layer retraining ] [ class balancing ] [ group robustness ] [ neural collapse ] [ spurious correlations ]
Last-layer retraining (LLR) methods — wherein the last layer of a neural network is reinitialized and retrained on a held-out set following ERM training — have recently garnered interest as an efficient approach to rectify dependence on spurious correlations and improve performance on minority groups. Surprisingly, LLR has been found to improve worst-group accuracy even when the held-out set is an imbalanced subset of the training set. We initially hypothesize that this “unreasonable effectiveness” of LLR is explained by its ability to mitigate neural collapse through the held-out set, resulting in the implicit bias of gradient descent benefiting robustness. Our empirical investigation does not support this hypothesis. Instead, we present strong evidence for an alternative hypothesis: that the success of LLR is primarily due to better group balance in the held-out set. We conclude by showing how the recent algorithms CB-LLR and AFR perform implicit group-balancing to elicit a robustness improvement.
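To make the procedure concrete, the sketch below illustrates last-layer retraining with an optional group-balanced subsampling of the held-out set. It is a minimal illustration, not the authors' implementation of LLR, CB-LLR, or AFR; the names `backbone`, `last_layer_retrain`, and `group_balanced_indices` are illustrative, and the sketch assumes the backbone returns penultimate-layer features and that group labels are available for the held-out set.

```python
# Minimal sketch (illustrative, not the authors' code): last-layer retraining (LLR)
# on a held-out set, with optional group-balanced subsampling of that set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset


def group_balanced_indices(groups):
    """Subsample so every group appears equally often (size of the smallest group)."""
    unique_groups = groups.unique()
    per_group = min(int((groups == g).sum()) for g in unique_groups)
    idx = []
    for g in unique_groups:
        g_idx = (groups == g).nonzero(as_tuple=True)[0]
        idx.append(g_idx[torch.randperm(len(g_idx))][:per_group])
    return torch.cat(idx)


def last_layer_retrain(backbone, feat_dim, num_classes,
                       X_heldout, y_heldout, groups=None,
                       epochs=100, lr=1e-3):
    """Freeze the ERM-trained backbone, reinitialize the last layer,
    and retrain it on the (optionally group-balanced) held-out set."""
    backbone.eval()
    with torch.no_grad():
        feats = backbone(X_heldout)            # assumes backbone outputs features
    if groups is not None:                     # optional group balancing
        keep = group_balanced_indices(groups)
        feats, y_heldout = feats[keep], y_heldout[keep]
    head = nn.Linear(feat_dim, num_classes)    # reinitialized last layer
    opt = torch.optim.SGD(head.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    loader = DataLoader(TensorDataset(feats, y_heldout),
                        batch_size=128, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(head(xb), yb).backward()
            opt.step()
    return head
```

Omitting `groups` corresponds to plain LLR on the held-out set as given; passing group labels corresponds to the group-balancing view discussed in the abstract, where equalizing group frequencies in the held-out set is hypothesized to drive the worst-group accuracy gains.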