

Poster

Do LLMs "know" internally when they follow instructions?

Juyeon Heo · Christina Heinze-Deml · Oussama Elachqar · Kwan Ho Ryan Chan · Shirley Ren · Andrew Miller · Udhyakumar Nallasamy · Jaya Narain

Hall 3 + Hall 2B #534
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Instruction-following is crucial for building AI agents with large language models (LLMs), as these models must adhere strictly to user-provided constraints and guidelines. However, LLMs often fail to follow even simple and clear instructions. To improve instruction-following behavior and prevent undesirable outputs, a deeper understanding of how LLMs' internal states relate to these outcomes is required. In this work, we investigate whether LLMs encode information in their representations that correlates with instruction-following success—a property we term "knowing internally". Our analysis identifies a direction in the input embedding space, termed the instruction-following dimension, that predicts whether a response will comply with a given instruction. We find that this dimension generalizes well across unseen tasks but not across unseen instruction types. We demonstrate that modifying representations along this dimension improves instruction-following success rates compared to random changes, without compromising response quality. Further investigation reveals that this dimension is more closely related to the phrasing of prompts than to the inherent difficulty of the task or instructions. This discovery also suggests explanations for why LLMs sometimes fail to follow clear instructions and why prompt engineering is often effective, even when the content remains largely unchanged. This work provides insight into the internal workings of LLMs' instruction-following, paving the way for reliable LLM agents.
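The abstract describes finding a single predictive direction in representation space and steering along it. A minimal sketch of that general recipe, using a difference-of-means probe on synthetic data (all data, dimensions, and the steering strength `alpha` here are hypothetical stand-ins, not the paper's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for LLM input embeddings: "follow" examples are shifted
# along a hidden ground-truth direction relative to "violate" examples.
# (Hypothetical data; the paper probes real model representations.)
d = 64
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)
follow = rng.normal(size=(200, d)) + 2.0 * true_dir
violate = rng.normal(size=(200, d)) - 2.0 * true_dir

# Difference-of-means probe: one simple way to estimate a single
# "instruction-following dimension" from labeled representations.
direction = follow.mean(axis=0) - violate.mean(axis=0)
direction /= np.linalg.norm(direction)

# Predict compliance by projecting each representation onto the direction
# and thresholding at the midpoint between class means.
threshold = 0.5 * ((follow @ direction).mean() + (violate @ direction).mean())
acc = ((follow @ direction > threshold).mean()
       + (violate @ direction <= threshold).mean()) / 2
print(f"probe accuracy: {acc:.2f}")

# Representation intervention: shift an example along the learned direction
# (cf. the paper's finding that this beats shifts along random directions).
x = violate[0]
alpha = 4.0  # steering strength (hypothetical hyperparameter)
x_steered = x + alpha * direction
print("projection before:", x @ direction, "after:", x_steered @ direction)
```

On this toy data the probe separates the two classes almost perfectly, and steering moves a "violate" representation toward the "follow" side of the threshold; with real model activations the probe would be fit on labeled (prompt, response-compliance) pairs instead.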
