How does information access affect LLM monitors' ability to detect sabotage?
Abstract
Frontier language model agents can exhibit misaligned behaviors, including deception, exploiting reward hacks, and pursuing hidden objectives. To control such agents, we can use LLMs themselves to monitor for misbehavior. In this paper, we study how information access affects LLM monitor performance and find that contemporary systems often perform better with less information, a phenomenon we call the less-is-more effect for automated oversight. We demonstrate this phenomenon, and analyze the conditions under which it occurs, in three evaluation environments where agents must conduct sabotage while evading monitors. Leveraging this effect, we introduce extract-and-evaluate (EaE) monitoring—a new hierarchical approach where one monitor isolates relevant excerpts from the monitored agent's trajectory and another monitor scores these excerpts. In BigCodeBench-Sabotage with a GPT-4.1-mini monitor, EaE improves sabotage detection rates by 16.8 percentage points over the next-best approach. In other settings, EaE either outperforms or is competitive with baselines. Additionally, we find that agents unaware of being monitored can be caught much more easily, that monitors scoring longer reasoning traces gain more from information filtering, and that monitor performance scales linearly with cost.