

In-Person Poster Presentation / Poster Accept

Improving Deep Regression with Ordinal Entropy

Shihao Zhang · Linlin Yang · Michael Bi Mi · Xiaoxu Zheng · Angela Yao

MH1-2-3-4 #58

Keywords: [ Deep Learning and representational learning ] [ regression ] [ classification ] [ depth estimation ] [ age estimation ] [ entropy ] [ counting ]


Abstract:

In computer vision, it is often observed that formulating regression problems as classification tasks yields better performance. We investigate this curious phenomenon and provide a derivation showing that classification, trained with a cross-entropy loss, outperforms regression, trained with a mean squared error loss, in its ability to learn high-entropy feature representations. Based on this analysis, we propose an ordinal entropy loss that encourages higher-entropy feature spaces while maintaining ordinal relationships, improving the performance of regression tasks. Experiments on synthetic and real-world regression tasks demonstrate the importance and benefits of increasing entropy for regression.
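The abstract describes the ordinal entropy loss only at a high level, so the exact formulation is not reproduced here. Below is a minimal PyTorch sketch of the general idea under one plausible design: a diversity term that spreads features apart in proportion to their distance in label space (raising feature entropy while respecting ordinality), plus a tightness term that pulls together features with similar targets. The function name ordinal_entropy_loss, the lambda_tight weight, and both terms are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ordinal_entropy_loss(features, targets, lambda_tight=1.0):
    # Hypothetical sketch; the paper's actual loss may differ.
    # features: (N, D) batch of feature embeddings
    # targets:  (N,) continuous regression targets
    z = F.normalize(features, dim=1)                      # unit-norm features
    feat_dist = torch.cdist(z, z, p=2)                    # (N, N) pairwise feature distances
    t = targets.float().view(-1, 1)
    label_dist = torch.cdist(t, t, p=1)                   # (N, N) pairwise label distances
    label_dist = label_dist / (label_dist.max() + 1e-8)   # normalize to [0, 1]
    n = z.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=z.device)
    # Diversity: maximize feature spread, weighted by label distance so that
    # samples far apart in label space are pushed furthest apart (ordinality).
    diversity = -(label_dist[off_diag] * feat_dist[off_diag]).mean()
    # Tightness: pull together features of samples with similar targets.
    tightness = ((1.0 - label_dist[off_diag]) * feat_dist[off_diag].pow(2)).mean()
    return diversity + lambda_tight * tightness
```

In practice such a term would be added to the primary regression objective, e.g. loss = F.mse_loss(pred, y) + w * ordinal_entropy_loss(feat, y), with the weight w tuned on a validation set.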
