Stability-Aware Prompt Optimization for Clinical Data Abstraction
Abstract
Large language models used for clinical abstraction are sensitive to prompt wording, yet most work treats prompts as fixed and studies uncertainty in isolation. We study three chart-grounded tasks (MedAlign applicability/correctness and MS subtype abstraction) across open and proprietary models, measuring prompt sensitivity via flip rates and relating it to calibration and selective prediction. We find that higher accuracy does not guarantee prompt stability, and that models can appear well-calibrated yet remain fragile to paraphrases. We then propose a dual-objective prompt optimization loop that jointly targets accuracy and stability, and show that explicitly including a stability term reduces flip rates across tasks and models, sometimes at a modest cost in accuracy. Our results suggest that prompt sensitivity should be an explicit objective when validating clinical LLM systems.
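As a concrete illustration of the flip-rate metric named above, one natural definition is the fraction of examples whose predicted label is not identical across all prompt paraphrases. This is a hedged sketch, not the paper's implementation; the function and variable names (`flip_rate`, `predictions`) are hypothetical.

```python
def flip_rate(predictions):
    """Fraction of examples whose label changes under at least one paraphrase.

    predictions: list of per-example lists of labels, one label per
    prompt paraphrase (illustrative data structure, not from the paper).
    """
    flipped = sum(1 for labels in predictions if len(set(labels)) > 1)
    return flipped / len(predictions)

# Three examples, each classified under three prompt paraphrases:
preds = [
    ["yes", "yes", "yes"],   # stable across paraphrases
    ["yes", "no", "yes"],    # flips under one paraphrase
    ["no", "no", "no"],      # stable across paraphrases
]
print(flip_rate(preds))  # 1 of 3 examples flips -> 0.333...
```

Under this definition, a dual-objective loop of the kind the abstract describes could score candidate prompts with something like `accuracy - lam * flip_rate`, trading a small amount of accuracy for lower paraphrase sensitivity.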