

Poster in Workshop: Pitfalls of limited data and computation for Trustworthy ML

Label Calibration for Semantic Segmentation Under Domain Shift

Ondrej Bohdal · Da Li · Timothy Hospedales


Abstract:

The performance of a pre-trained semantic segmentation model is likely to decrease substantially on data from a new domain. We show that a pre-trained model can be adapted to unlabelled target-domain data by calculating soft-label prototypes under the domain shift and making predictions according to the prototype closest to the vector of predicted class probabilities. The proposed adaptation procedure is fast, comes almost for free in terms of computational resources, and leads to considerable performance improvements. We demonstrate the benefits of such label calibration on the highly practical synthetic-to-real semantic segmentation problem.
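For illustration, the following is a minimal PyTorch sketch of the label-calibration idea described above, not the authors' released code: it estimates one soft-label prototype per class by averaging the softmax vectors of pixels pseudo-labelled with that class on unlabelled target data, then re-labels each pixel with the class of the nearest prototype. The function names (`compute_prototypes`, `calibrated_predict`), the data-loader interface, and the use of Euclidean distance between probability vectors and prototypes are assumptions made for this sketch.

```python
# Hedged sketch of label calibration via soft-label prototypes.
# Assumes `model` is a pre-trained segmentation network returning per-pixel
# logits of shape (B, C, H, W) and `target_loader` yields unlabelled
# target-domain image batches. All names here are illustrative.
import torch
import torch.nn.functional as F


@torch.no_grad()
def compute_prototypes(model, target_loader, num_classes, device="cuda"):
    """Average the softmax vectors of pixels pseudo-labelled with each class."""
    model.eval()
    proto_sum = torch.zeros(num_classes, num_classes, device=device)
    proto_cnt = torch.zeros(num_classes, device=device)
    for images in target_loader:
        probs = F.softmax(model(images.to(device)), dim=1)            # (B, C, H, W)
        probs = probs.permute(0, 2, 3, 1).reshape(-1, num_classes)    # (N, C)
        pseudo = probs.argmax(dim=1)                                  # hard pseudo-labels
        proto_sum.index_add_(0, pseudo, probs)                        # accumulate soft labels per class
        proto_cnt.index_add_(0, pseudo, torch.ones_like(pseudo, dtype=torch.float))
    return proto_sum / proto_cnt.clamp(min=1).unsqueeze(1)            # (C, C) prototypes


@torch.no_grad()
def calibrated_predict(model, image, prototypes, device="cuda"):
    """Assign each pixel the class of the prototype nearest to its predicted
    probability vector (Euclidean distance is an assumption of this sketch)."""
    probs = F.softmax(model(image.to(device)), dim=1)                 # (1, C, H, W)
    b, c, h, w = probs.shape
    flat = probs.permute(0, 2, 3, 1).reshape(-1, c)                   # (N, C)
    dists = torch.cdist(flat, prototypes)                             # (N, C) distances to prototypes
    return dists.argmin(dim=1).reshape(b, h, w)                       # calibrated per-pixel labels
```

In this sketch the prototypes are computed once over the unlabelled target data and then reused at prediction time, which is consistent with the claim that the adaptation is fast and adds almost no computational cost.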
