Enhancing Contrastive Learning for Ordinal Regression via Ordinal Content Preserved Data Augmentation

Jiyang Zheng · Yu Yao · Bo Han · Dadong Wang · Tongliang Liu

Halle B #195
Thu 9 May 1:45 a.m. PDT — 3:45 a.m. PDT


Contrastive learning, while highly effective across many tasks, yields limited improvement in ordinal regression. We find that this limitation stems from the predefined strong data augmentations employed in contrastive learning. Intuitively, in ordinal regression datasets, the discriminative information (ordinal content information) contained in instances is subtle, and strong augmentations can easily overshadow or diminish it. As a result, when contrastive learning is used to extract features common to weakly and strongly augmented images, the derived features often lack this essential ordinal content, rendering them less useful for training ordinal regression models. To improve contrastive learning's utility for ordinal regression, we propose a novel augmentation method, based on the principle of minimal change, to replace the predefined strong augmentation. Our method is designed in a generative manner and can effectively generate images with different styles that retain the desired ordinal content information. Extensive experiments validate the effectiveness of the proposed method, which serves as a plug-and-play solution and consistently improves the performance of existing state-of-the-art methods on ordinal regression tasks.
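For readers unfamiliar with the contrastive setup the abstract refers to, the sketch below shows a standard InfoNCE objective computed between embeddings of two views of the same images (e.g. a weakly augmented view and a strongly or generatively augmented view). This is a minimal illustration of the generic contrastive loss, not the paper's method; the function name and shapes are assumptions for the example.

```python
import numpy as np

def info_nce_loss(z_weak, z_strong, temperature=0.5):
    """InfoNCE contrastive loss between two views of the same batch.

    z_weak, z_strong: (N, D) embeddings of the weakly and (strongly or
    generatively) augmented views of the same N images. Row i of each
    matrix is a positive pair; all other rows serve as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    z_weak = z_weak / np.linalg.norm(z_weak, axis=1, keepdims=True)
    z_strong = z_strong / np.linalg.norm(z_strong, axis=1, keepdims=True)
    # (N, N) similarity matrix, scaled by temperature.
    logits = z_weak @ z_strong.T / temperature
    # Numerically stable log-softmax over each row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives sit on the diagonal (view i of image i matches view i).
    return -np.mean(np.diag(log_probs))
```

Because the loss is minimized when each weak-view embedding is closest to its own strong-view counterpart, the features it learns are exactly the content the two views share; the abstract's point is that if the strong augmentation has destroyed the ordinal content, that shared content no longer includes it.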
