

Poster
in
Workshop: Quantify Uncertainty and Hallucination in Foundation Models: The Next Frontier in Reliable AI

Uncertainty of Vision Medical Foundation Models

Haoxu Huang · Narges Razavian

Keywords: [ Medical Imaging ] [ Foundation Model ] [ Uncertainty ]


Abstract:

Accurate uncertainty estimation is essential for machine learning systems deployed in high-stakes domains such as medicine. Traditional approaches primarily rely on probability outputs from trained models (point predictions), which provide no formal guarantees on prediction coverage and often require additional calibration techniques to improve reliability. In contrast, conformal prediction (region prediction) offers a principled alternative by generating prediction sets with finite-sample validity guarantees, ensuring that the ground truth is contained within the set at a specified confidence level.

In this study, we explore the impact of pre-training approach, dataset scale, and domain on both point- and region-level uncertainty quantification by comparing domain-specific vision medical foundation models against general-domain vision foundation models. We conduct a comprehensive evaluation across foundation models trained on retinal, histopathological, and magnetic resonance imaging (MRI) data, applying various calibration techniques. Our results demonstrate that (1) pre-training on larger, higher-quality datasets along with self-supervised learning leads to better-calibrated point predictions, irrespective of whether the data source is domain-specific, (2) better point prediction calibration does not directly translate to improved region prediction performance, and (3) standard re-calibration methods alone cannot fully mitigate uncertainty discrepancies across models trained on different data sources.

These findings highlight the importance of careful model selection and the integration of both point and region prediction strategies to enhance the reliability and trustworthiness of medical AI systems. Our work underscores the need for a holistic approach to uncertainty quantification in the recent development of medical vision foundation models, ensuring robust and interpretable AI-driven decision-making.
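To make the "region prediction" idea concrete, the following is a minimal sketch of split conformal prediction for classification, the standard recipe behind prediction sets with finite-sample coverage guarantees. This is an illustrative toy with synthetic probabilities, not the paper's implementation; all function names and the nonconformity score (1 minus the probability of the true class) are assumptions based on the common split-conformal formulation.

```python
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Calibration step: nonconformity score = 1 - p(true class).

    Returns the finite-sample-corrected (1 - alpha) quantile of the
    calibration scores, which defines the prediction-set threshold.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # ceil((n+1)(1-alpha))/n correction yields the coverage guarantee
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_set(test_probs, qhat):
    """Include every class whose probability exceeds 1 - qhat."""
    return test_probs >= 1.0 - qhat

# Toy usage with synthetic softmax outputs (3 classes)
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = cal_probs.argmax(axis=1)  # toy "ground truth"
qhat = conformal_quantile(cal_probs, cal_labels, alpha=0.1)
sets = prediction_set(rng.dirichlet(np.ones(3), size=5), qhat)
```

Under exchangeability of calibration and test points, the resulting sets contain the true label with probability at least 1 - alpha; the abstract's finding (2) is that improving point calibration of the underlying probabilities does not automatically shrink or improve these sets.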
