Agentic Uncertainty Reveals Agentic Overconfidence
Jean Kaddour ⋅ Srijan Patel ⋅ Gbetondji Dovonon ⋅ Leo Richter ⋅ Pasquale Minervini ⋅ Matt Kusner
Abstract
Can AI agents predict whether they will succeed at a task? We study agentic uncertainty by eliciting success probability estimates before, during, and after task execution. Across all settings, agents exhibit agentic overconfidence: some agents that succeed on only 22% of tasks predict 77% success. Counterintuitively, pre-execution assessment, despite having less information, tends to discriminate success from failure better than standard post-execution review, though the differences are not always significant. Adversarial prompting that reframes assessment as bug-finding achieves the best calibration.
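The calibration gap described above can be quantified with a standard metric such as expected calibration error (ECE). The sketch below is illustrative, not the paper's evaluation code; the function name and binning scheme are assumptions, and the toy data mirrors the 22%-success / 77%-confidence example from the abstract.

```python
import numpy as np

def expected_calibration_error(probs, outcomes, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted
    average gap between mean confidence and empirical success rate.
    (Illustrative implementation, not the paper's actual code.)"""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # include 1.0 in the last bin
        mask = (probs >= lo) & ((probs < hi) | (hi == 1.0))
        if mask.any():
            gap = abs(probs[mask].mean() - outcomes[mask].mean())
            ece += mask.mean() * gap  # weight by bin occupancy
    return ece

# Hypothetical agent: predicts 77% success but succeeds 22% of the time
probs = np.full(100, 0.77)
outcomes = np.array([1.0] * 22 + [0.0] * 78)
print(round(expected_calibration_error(probs, outcomes), 2))  # 0.55
```

A perfectly calibrated agent would score near 0; the overconfident toy agent above scores 0.55, the gap between its stated confidence and its actual success rate.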